00:00:00.000 Started by upstream project "autotest-per-patch" build number 127200 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.110 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.111 The recommended git tool is: git 00:00:00.111 using credential 00000000-0000-0000-0000-000000000002 00:00:00.113 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.158 Fetching changes from the remote Git repository 00:00:00.160 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.201 Using shallow fetch with depth 1 00:00:00.201 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.201 > git --version # timeout=10 00:00:00.229 > git --version # 'git version 2.39.2' 00:00:00.229 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.242 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.242 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.197 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.207 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.216 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:06.216 > git config core.sparsecheckout # timeout=10 00:00:06.226 > git read-tree -mu HEAD # timeout=10 00:00:06.241 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:06.257 Commit message: "packer: Add bios builder" 00:00:06.257 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:06.342 [Pipeline] Start of Pipeline 00:00:06.395 [Pipeline] library 00:00:06.396 Loading library shm_lib@master 00:00:06.397 Library shm_lib@master is cached. Copying from home. 00:00:06.414 [Pipeline] node 00:00:06.423 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.425 [Pipeline] { 00:00:06.435 [Pipeline] catchError 00:00:06.436 [Pipeline] { 00:00:06.448 [Pipeline] wrap 00:00:06.456 [Pipeline] { 00:00:06.464 [Pipeline] stage 00:00:06.466 [Pipeline] { (Prologue) 00:00:06.480 [Pipeline] echo 00:00:06.481 Node: VM-host-SM16 00:00:06.486 [Pipeline] cleanWs 00:00:06.494 [WS-CLEANUP] Deleting project workspace... 00:00:06.494 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.501 [WS-CLEANUP] done 00:00:06.666 [Pipeline] setCustomBuildProperty 00:00:06.725 [Pipeline] httpRequest 00:00:06.751 [Pipeline] echo 00:00:06.752 Sorcerer 10.211.164.101 is alive 00:00:06.758 [Pipeline] httpRequest 00:00:06.761 HttpMethod: GET 00:00:06.762 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:06.762 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:06.773 Response Code: HTTP/1.1 200 OK 00:00:06.773 Success: Status code 200 is in the accepted range: 200,404 00:00:06.774 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:10.350 [Pipeline] sh 00:00:10.631 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:10.652 [Pipeline] httpRequest 00:00:10.671 [Pipeline] echo 00:00:10.673 Sorcerer 10.211.164.101 is alive 00:00:10.683 [Pipeline] httpRequest 00:00:10.688 HttpMethod: GET 00:00:10.688 URL: http://10.211.164.101/packages/spdk_5c22a76d6a43def9b22c18dd5bc903a6b33d5f72.tar.gz 00:00:10.689 Sending request to url: http://10.211.164.101/packages/spdk_5c22a76d6a43def9b22c18dd5bc903a6b33d5f72.tar.gz 00:00:10.711 Response Code: HTTP/1.1 200 OK 00:00:10.711 Success: Status code 200 is in the accepted range: 200,404 00:00:10.712 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_5c22a76d6a43def9b22c18dd5bc903a6b33d5f72.tar.gz 00:00:57.389 [Pipeline] sh 00:00:57.669 + tar --no-same-owner -xf spdk_5c22a76d6a43def9b22c18dd5bc903a6b33d5f72.tar.gz 00:01:00.236 [Pipeline] sh 00:01:00.515 + git -C spdk log --oneline -n5 00:01:00.515 5c22a76d6 sock/uring: support src_{addr,port} in connect() 00:01:00.515 546346ebd sock/posix: support src_{addr,port} in connect() 00:01:00.515 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:01:00.515 fc2398dfa raid: clear base bdev configure_cb after executing 00:01:00.515 5558f3f50 raid: complete bdev_raid_create after sb is written 00:01:00.533 [Pipeline] writeFile 00:01:00.553 [Pipeline] sh 00:01:00.832 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:00.843 [Pipeline] sh 00:01:01.122 + cat autorun-spdk.conf 00:01:01.122 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.122 SPDK_TEST_NVMF=1 00:01:01.122 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:01.122 SPDK_TEST_URING=1 00:01:01.122 SPDK_TEST_USDT=1 00:01:01.122 SPDK_RUN_UBSAN=1 00:01:01.122 NET_TYPE=virt 00:01:01.122 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:01.127 RUN_NIGHTLY=0 00:01:01.131 [Pipeline] } 00:01:01.148 [Pipeline] // stage 00:01:01.161 [Pipeline] stage 00:01:01.163 [Pipeline] { (Run VM) 00:01:01.174 [Pipeline] sh 00:01:01.447 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:01.448 + echo 'Start stage prepare_nvme.sh' 00:01:01.448 Start stage prepare_nvme.sh 00:01:01.448 + [[ -n 3 ]] 00:01:01.448 + disk_prefix=ex3 00:01:01.448 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:01.448 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:01.448 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:01.448 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.448 ++ SPDK_TEST_NVMF=1 00:01:01.448 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:01.448 ++ SPDK_TEST_URING=1 00:01:01.448 ++ SPDK_TEST_USDT=1 00:01:01.448 ++ SPDK_RUN_UBSAN=1 00:01:01.448 ++ NET_TYPE=virt 00:01:01.448 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:01.448 ++ RUN_NIGHTLY=0 00:01:01.448 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:01.448 + nvme_files=() 00:01:01.448 + declare -A nvme_files 00:01:01.448 + backend_dir=/var/lib/libvirt/images/backends 00:01:01.448 + nvme_files['nvme.img']=5G 00:01:01.448 + nvme_files['nvme-cmb.img']=5G 00:01:01.448 + nvme_files['nvme-multi0.img']=4G 00:01:01.448 + nvme_files['nvme-multi1.img']=4G 00:01:01.448 + nvme_files['nvme-multi2.img']=4G 00:01:01.448 + nvme_files['nvme-openstack.img']=8G 00:01:01.448 + nvme_files['nvme-zns.img']=5G 00:01:01.448 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:01.448 + (( SPDK_TEST_FTL == 1 )) 00:01:01.448 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:01.448 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:01.448 + for nvme in "${!nvme_files[@]}" 00:01:01.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:01.448 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:01.448 + for nvme in "${!nvme_files[@]}" 00:01:01.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:01.448 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:01.448 + for nvme in "${!nvme_files[@]}" 00:01:01.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:01.448 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:01.448 + for nvme in "${!nvme_files[@]}" 00:01:01.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:01.448 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:01.448 + for nvme in "${!nvme_files[@]}" 00:01:01.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:01.448 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:01.448 + for nvme in "${!nvme_files[@]}" 00:01:01.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:01.448 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:01.448 + for nvme in "${!nvme_files[@]}" 00:01:01.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:01.448 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:01.448 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:01.448 + echo 'End stage prepare_nvme.sh' 00:01:01.448 End stage prepare_nvme.sh 00:01:01.459 [Pipeline] sh 00:01:01.736 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:01.736 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora38 00:01:01.736 00:01:01.736 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:01.736 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:01.736 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:01.736 HELP=0 00:01:01.736 DRY_RUN=0 00:01:01.736 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:01:01.736 NVME_DISKS_TYPE=nvme,nvme, 00:01:01.736 NVME_AUTO_CREATE=0 00:01:01.736 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:01:01.736 NVME_CMB=,, 00:01:01.736 NVME_PMR=,, 00:01:01.736 NVME_ZNS=,, 00:01:01.736 NVME_MS=,, 00:01:01.736 NVME_FDP=,, 
00:01:01.736 SPDK_VAGRANT_DISTRO=fedora38 00:01:01.736 SPDK_VAGRANT_VMCPU=10 00:01:01.736 SPDK_VAGRANT_VMRAM=12288 00:01:01.736 SPDK_VAGRANT_PROVIDER=libvirt 00:01:01.736 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:01.736 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:01.736 SPDK_OPENSTACK_NETWORK=0 00:01:01.736 VAGRANT_PACKAGE_BOX=0 00:01:01.736 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:01.736 FORCE_DISTRO=true 00:01:01.736 VAGRANT_BOX_VERSION= 00:01:01.736 EXTRA_VAGRANTFILES= 00:01:01.736 NIC_MODEL=e1000 00:01:01.736 00:01:01.736 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:01:01.736 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:05.042 Bringing machine 'default' up with 'libvirt' provider... 00:01:05.042 ==> default: Creating image (snapshot of base box volume). 00:01:05.300 ==> default: Creating domain with the following settings... 00:01:05.300 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721978790_8fc436c269cec6d51a1a 00:01:05.300 ==> default: -- Domain type: kvm 00:01:05.300 ==> default: -- Cpus: 10 00:01:05.300 ==> default: -- Feature: acpi 00:01:05.300 ==> default: -- Feature: apic 00:01:05.300 ==> default: -- Feature: pae 00:01:05.300 ==> default: -- Memory: 12288M 00:01:05.300 ==> default: -- Memory Backing: hugepages: 00:01:05.300 ==> default: -- Management MAC: 00:01:05.300 ==> default: -- Loader: 00:01:05.300 ==> default: -- Nvram: 00:01:05.300 ==> default: -- Base box: spdk/fedora38 00:01:05.300 ==> default: -- Storage pool: default 00:01:05.300 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721978790_8fc436c269cec6d51a1a.img (20G) 00:01:05.300 ==> default: -- Volume Cache: default 00:01:05.300 ==> default: -- Kernel: 00:01:05.300 ==> default: -- Initrd: 00:01:05.300 ==> default: -- Graphics Type: vnc 00:01:05.300 ==> default: -- Graphics Port: -1 00:01:05.300 ==> default: -- Graphics IP: 127.0.0.1 00:01:05.300 ==> default: -- Graphics Password: Not defined 00:01:05.300 ==> default: -- Video Type: cirrus 00:01:05.300 ==> default: -- Video VRAM: 9216 00:01:05.300 ==> default: -- Sound Type: 00:01:05.300 ==> default: -- Keymap: en-us 00:01:05.300 ==> default: -- TPM Path: 00:01:05.300 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:05.300 ==> default: -- Command line args: 00:01:05.300 ==> default: -> value=-device, 00:01:05.300 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:05.300 ==> default: -> value=-drive, 00:01:05.300 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:01:05.300 ==> default: -> value=-device, 00:01:05.300 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:05.300 ==> default: -> value=-device, 00:01:05.300 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:05.300 ==> default: -> value=-drive, 00:01:05.300 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:05.300 ==> default: -> value=-device, 00:01:05.300 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:05.300 ==> default: -> value=-drive, 
00:01:05.300 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:05.300 ==> default: -> value=-device, 00:01:05.300 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:05.300 ==> default: -> value=-drive, 00:01:05.300 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:05.300 ==> default: -> value=-device, 00:01:05.300 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:05.300 ==> default: Creating shared folders metadata... 00:01:05.300 ==> default: Starting domain. 00:01:07.208 ==> default: Waiting for domain to get an IP address... 00:01:22.083 ==> default: Waiting for SSH to become available... 00:01:23.981 ==> default: Configuring and enabling network interfaces... 00:01:29.281 default: SSH address: 192.168.121.40:22 00:01:29.281 default: SSH username: vagrant 00:01:29.281 default: SSH auth method: private key 00:01:30.669 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:38.781 ==> default: Mounting SSHFS shared folder... 00:01:40.683 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:40.684 ==> default: Checking Mount.. 00:01:41.620 ==> default: Folder Successfully Mounted! 00:01:41.620 ==> default: Running provisioner: file... 00:01:42.556 default: ~/.gitconfig => .gitconfig 00:01:42.814 00:01:42.814 SUCCESS! 00:01:42.814 00:01:42.814 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:42.814 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:42.814 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:42.814 00:01:42.823 [Pipeline] } 00:01:42.841 [Pipeline] // stage 00:01:42.851 [Pipeline] dir 00:01:42.851 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:42.853 [Pipeline] { 00:01:42.868 [Pipeline] catchError 00:01:42.870 [Pipeline] { 00:01:42.885 [Pipeline] sh 00:01:43.165 + vagrant ssh-config --host vagrant 00:01:43.165 + sed -ne /^Host/,$p 00:01:43.165 + tee ssh_conf 00:01:46.463 Host vagrant 00:01:46.463 HostName 192.168.121.40 00:01:46.463 User vagrant 00:01:46.463 Port 22 00:01:46.463 UserKnownHostsFile /dev/null 00:01:46.463 StrictHostKeyChecking no 00:01:46.463 PasswordAuthentication no 00:01:46.463 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:46.463 IdentitiesOnly yes 00:01:46.463 LogLevel FATAL 00:01:46.463 ForwardAgent yes 00:01:46.463 ForwardX11 yes 00:01:46.463 00:01:46.506 [Pipeline] withEnv 00:01:46.509 [Pipeline] { 00:01:46.525 [Pipeline] sh 00:01:46.804 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:46.804 source /etc/os-release 00:01:46.804 [[ -e /image.version ]] && img=$(< /image.version) 00:01:46.804 # Minimal, systemd-like check. 
00:01:46.804 if [[ -e /.dockerenv ]]; then 00:01:46.804 # Clear garbage from the node's name: 00:01:46.804 # agt-er_autotest_547-896 -> autotest_547-896 00:01:46.804 # $HOSTNAME is the actual container id 00:01:46.804 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:46.804 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:46.804 # We can assume this is a mount from a host where container is running, 00:01:46.804 # so fetch its hostname to easily identify the target swarm worker. 00:01:46.804 container="$(< /etc/hostname) ($agent)" 00:01:46.804 else 00:01:46.804 # Fallback 00:01:46.804 container=$agent 00:01:46.804 fi 00:01:46.804 fi 00:01:46.805 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:46.805 00:01:47.074 [Pipeline] } 00:01:47.095 [Pipeline] // withEnv 00:01:47.104 [Pipeline] setCustomBuildProperty 00:01:47.121 [Pipeline] stage 00:01:47.123 [Pipeline] { (Tests) 00:01:47.143 [Pipeline] sh 00:01:47.421 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:47.716 [Pipeline] sh 00:01:47.995 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:48.266 [Pipeline] timeout 00:01:48.266 Timeout set to expire in 30 min 00:01:48.268 [Pipeline] { 00:01:48.284 [Pipeline] sh 00:01:48.562 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:49.128 HEAD is now at 5c22a76d6 sock/uring: support src_{addr,port} in connect() 00:01:49.141 [Pipeline] sh 00:01:49.419 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:49.691 [Pipeline] sh 00:01:49.969 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:50.243 [Pipeline] sh 00:01:50.521 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:50.521 ++ readlink -f spdk_repo 00:01:50.521 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:50.521 + [[ -n /home/vagrant/spdk_repo ]] 00:01:50.779 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:50.779 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:50.779 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:50.779 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:50.779 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:50.779 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:50.779 + cd /home/vagrant/spdk_repo 00:01:50.779 + source /etc/os-release 00:01:50.779 ++ NAME='Fedora Linux' 00:01:50.779 ++ VERSION='38 (Cloud Edition)' 00:01:50.779 ++ ID=fedora 00:01:50.779 ++ VERSION_ID=38 00:01:50.779 ++ VERSION_CODENAME= 00:01:50.779 ++ PLATFORM_ID=platform:f38 00:01:50.779 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:50.779 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:50.779 ++ LOGO=fedora-logo-icon 00:01:50.779 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:50.779 ++ HOME_URL=https://fedoraproject.org/ 00:01:50.779 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:50.779 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:50.779 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:50.779 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:50.779 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:50.779 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:50.779 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:50.779 ++ SUPPORT_END=2024-05-14 00:01:50.779 ++ VARIANT='Cloud Edition' 00:01:50.779 ++ VARIANT_ID=cloud 00:01:50.779 + uname -a 00:01:50.779 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:50.779 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:51.038 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:51.038 Hugepages 00:01:51.038 node hugesize free / total 00:01:51.038 node0 1048576kB 0 / 0 00:01:51.038 node0 2048kB 0 / 0 00:01:51.038 00:01:51.038 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:51.296 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:51.296 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:51.296 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:51.296 + rm -f /tmp/spdk-ld-path 00:01:51.296 + source autorun-spdk.conf 00:01:51.296 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.296 ++ SPDK_TEST_NVMF=1 00:01:51.296 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:51.296 ++ SPDK_TEST_URING=1 00:01:51.296 ++ SPDK_TEST_USDT=1 00:01:51.296 ++ SPDK_RUN_UBSAN=1 00:01:51.296 ++ NET_TYPE=virt 00:01:51.296 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:51.296 ++ RUN_NIGHTLY=0 00:01:51.296 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:51.296 + [[ -n '' ]] 00:01:51.296 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:51.296 + for M in /var/spdk/build-*-manifest.txt 00:01:51.296 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:51.296 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:51.296 + for M in /var/spdk/build-*-manifest.txt 00:01:51.296 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:51.296 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:51.296 ++ uname 00:01:51.296 + [[ Linux == \L\i\n\u\x ]] 00:01:51.296 + sudo dmesg -T 00:01:51.296 + sudo dmesg --clear 00:01:51.296 + dmesg_pid=5267 00:01:51.296 + sudo dmesg -Tw 00:01:51.296 + [[ Fedora Linux == FreeBSD ]] 00:01:51.296 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:51.296 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:51.296 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:51.296 + [[ -x /usr/src/fio-static/fio ]] 00:01:51.296 + export FIO_BIN=/usr/src/fio-static/fio 
00:01:51.296 + FIO_BIN=/usr/src/fio-static/fio 00:01:51.296 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:51.296 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:51.296 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:51.296 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:51.296 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:51.296 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:51.296 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:51.296 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:51.296 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:51.296 Test configuration: 00:01:51.296 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.296 SPDK_TEST_NVMF=1 00:01:51.296 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:51.296 SPDK_TEST_URING=1 00:01:51.296 SPDK_TEST_USDT=1 00:01:51.296 SPDK_RUN_UBSAN=1 00:01:51.296 NET_TYPE=virt 00:01:51.296 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:51.554 RUN_NIGHTLY=0 07:27:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:51.554 07:27:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:51.554 07:27:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:51.554 07:27:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:51.554 07:27:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.554 07:27:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.554 07:27:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.554 07:27:16 -- paths/export.sh@5 -- $ export PATH 00:01:51.554 07:27:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.554 07:27:16 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:51.554 07:27:16 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:51.554 07:27:16 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721978836.XXXXXX 00:01:51.554 07:27:16 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721978836.BuTNlB 00:01:51.554 07:27:16 -- 
common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:51.554 07:27:16 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:51.554 07:27:16 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:51.554 07:27:16 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:51.554 07:27:16 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:51.554 07:27:16 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:51.554 07:27:16 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:51.554 07:27:16 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.554 07:27:16 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:51.554 07:27:16 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:51.554 07:27:16 -- pm/common@17 -- $ local monitor 00:01:51.554 07:27:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.554 07:27:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.554 07:27:16 -- pm/common@25 -- $ sleep 1 00:01:51.554 07:27:16 -- pm/common@21 -- $ date +%s 00:01:51.554 07:27:16 -- pm/common@21 -- $ date +%s 00:01:51.554 07:27:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721978836 00:01:51.554 07:27:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721978836 00:01:51.554 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721978836_collect-vmstat.pm.log 00:01:51.554 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721978836_collect-cpu-load.pm.log 00:01:52.513 07:27:17 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:52.513 07:27:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:52.513 07:27:17 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:52.513 07:27:17 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:52.513 07:27:17 -- spdk/autobuild.sh@16 -- $ date -u 00:01:52.513 Fri Jul 26 07:27:17 AM UTC 2024 00:01:52.513 07:27:17 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:52.513 v24.09-pre-323-g5c22a76d6 00:01:52.513 07:27:17 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:52.513 07:27:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:52.513 07:27:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:52.513 07:27:17 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:52.513 07:27:17 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:52.513 07:27:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.513 ************************************ 00:01:52.513 START TEST ubsan 00:01:52.513 ************************************ 00:01:52.513 using ubsan 00:01:52.513 07:27:17 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:52.513 00:01:52.513 real 0m0.000s 00:01:52.513 user 0m0.000s 00:01:52.513 sys 0m0.000s 00:01:52.513 
07:27:17 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:52.513 07:27:17 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:52.513 ************************************ 00:01:52.513 END TEST ubsan 00:01:52.513 ************************************ 00:01:52.513 07:27:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:52.513 07:27:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:52.513 07:27:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:52.513 07:27:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:52.513 07:27:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:52.513 07:27:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:52.513 07:27:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:52.513 07:27:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:52.513 07:27:18 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:52.773 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:52.773 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:53.031 Using 'verbs' RDMA provider 00:02:08.841 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:21.040 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:21.040 Creating mk/config.mk...done. 00:02:21.040 Creating mk/cc.flags.mk...done. 00:02:21.040 Type 'make' to build. 00:02:21.040 07:27:45 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:21.040 07:27:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:21.040 07:27:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:21.040 07:27:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.040 ************************************ 00:02:21.040 START TEST make 00:02:21.040 ************************************ 00:02:21.040 07:27:45 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:21.040 make[1]: Nothing to be done for 'all'. 
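The configure flags and the make invocation captured above fully determine the feature set under test in this run. A minimal sketch of reproducing the same SPDK build outside the CI VM follows; the clone location, the use of scripts/pkgdep.sh for dependencies, and the presence of fio sources at /usr/src/fio are assumptions, while the configure flags are copied verbatim from the autobuild log.

# Hypothetical local reproduction of the build step recorded in this log
# (paths and dependency handling are assumptions; configure flags match the log).
git clone https://github.com/spdk/spdk.git ~/spdk && cd ~/spdk
git submodule update --init                  # pulls DPDK and other submodules used above
sudo scripts/pkgdep.sh                       # install build dependencies (assumed sufficient)
./configure --enable-debug --enable-werror --with-rdma --with-usdt \
    --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
    --disable-unit-tests --enable-ubsan --enable-coverage \
    --with-ublk --with-uring --with-shared
make -j"$(nproc)"                            # the CI run used -j10 on a 10-vCPU VM
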
00:02:31.025 The Meson build system 00:02:31.025 Version: 1.3.1 00:02:31.025 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:31.025 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:31.025 Build type: native build 00:02:31.025 Program cat found: YES (/usr/bin/cat) 00:02:31.025 Project name: DPDK 00:02:31.025 Project version: 24.03.0 00:02:31.025 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:31.025 C linker for the host machine: cc ld.bfd 2.39-16 00:02:31.025 Host machine cpu family: x86_64 00:02:31.025 Host machine cpu: x86_64 00:02:31.025 Message: ## Building in Developer Mode ## 00:02:31.025 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:31.025 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:31.025 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:31.025 Program python3 found: YES (/usr/bin/python3) 00:02:31.025 Program cat found: YES (/usr/bin/cat) 00:02:31.025 Compiler for C supports arguments -march=native: YES 00:02:31.025 Checking for size of "void *" : 8 00:02:31.025 Checking for size of "void *" : 8 (cached) 00:02:31.025 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:31.025 Library m found: YES 00:02:31.025 Library numa found: YES 00:02:31.025 Has header "numaif.h" : YES 00:02:31.025 Library fdt found: NO 00:02:31.025 Library execinfo found: NO 00:02:31.025 Has header "execinfo.h" : YES 00:02:31.025 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:31.025 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:31.025 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:31.025 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:31.025 Run-time dependency openssl found: YES 3.0.9 00:02:31.025 Run-time dependency libpcap found: YES 1.10.4 00:02:31.025 Has header "pcap.h" with dependency libpcap: YES 00:02:31.025 Compiler for C supports arguments -Wcast-qual: YES 00:02:31.025 Compiler for C supports arguments -Wdeprecated: YES 00:02:31.025 Compiler for C supports arguments -Wformat: YES 00:02:31.025 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:31.025 Compiler for C supports arguments -Wformat-security: NO 00:02:31.025 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:31.025 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:31.025 Compiler for C supports arguments -Wnested-externs: YES 00:02:31.025 Compiler for C supports arguments -Wold-style-definition: YES 00:02:31.025 Compiler for C supports arguments -Wpointer-arith: YES 00:02:31.025 Compiler for C supports arguments -Wsign-compare: YES 00:02:31.025 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:31.025 Compiler for C supports arguments -Wundef: YES 00:02:31.025 Compiler for C supports arguments -Wwrite-strings: YES 00:02:31.025 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:31.025 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:31.025 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:31.025 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:31.025 Program objdump found: YES (/usr/bin/objdump) 00:02:31.025 Compiler for C supports arguments -mavx512f: YES 00:02:31.025 Checking if "AVX512 checking" compiles: YES 00:02:31.025 Fetching value of define "__SSE4_2__" : 1 00:02:31.025 Fetching value of define 
"__AES__" : 1 00:02:31.025 Fetching value of define "__AVX__" : 1 00:02:31.025 Fetching value of define "__AVX2__" : 1 00:02:31.025 Fetching value of define "__AVX512BW__" : (undefined) 00:02:31.025 Fetching value of define "__AVX512CD__" : (undefined) 00:02:31.025 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:31.025 Fetching value of define "__AVX512F__" : (undefined) 00:02:31.025 Fetching value of define "__AVX512VL__" : (undefined) 00:02:31.025 Fetching value of define "__PCLMUL__" : 1 00:02:31.025 Fetching value of define "__RDRND__" : 1 00:02:31.025 Fetching value of define "__RDSEED__" : 1 00:02:31.025 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:31.025 Fetching value of define "__znver1__" : (undefined) 00:02:31.025 Fetching value of define "__znver2__" : (undefined) 00:02:31.025 Fetching value of define "__znver3__" : (undefined) 00:02:31.025 Fetching value of define "__znver4__" : (undefined) 00:02:31.025 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:31.025 Message: lib/log: Defining dependency "log" 00:02:31.025 Message: lib/kvargs: Defining dependency "kvargs" 00:02:31.025 Message: lib/telemetry: Defining dependency "telemetry" 00:02:31.025 Checking for function "getentropy" : NO 00:02:31.025 Message: lib/eal: Defining dependency "eal" 00:02:31.025 Message: lib/ring: Defining dependency "ring" 00:02:31.025 Message: lib/rcu: Defining dependency "rcu" 00:02:31.025 Message: lib/mempool: Defining dependency "mempool" 00:02:31.025 Message: lib/mbuf: Defining dependency "mbuf" 00:02:31.025 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:31.025 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:31.025 Compiler for C supports arguments -mpclmul: YES 00:02:31.025 Compiler for C supports arguments -maes: YES 00:02:31.025 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:31.026 Compiler for C supports arguments -mavx512bw: YES 00:02:31.026 Compiler for C supports arguments -mavx512dq: YES 00:02:31.026 Compiler for C supports arguments -mavx512vl: YES 00:02:31.026 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:31.026 Compiler for C supports arguments -mavx2: YES 00:02:31.026 Compiler for C supports arguments -mavx: YES 00:02:31.026 Message: lib/net: Defining dependency "net" 00:02:31.026 Message: lib/meter: Defining dependency "meter" 00:02:31.026 Message: lib/ethdev: Defining dependency "ethdev" 00:02:31.026 Message: lib/pci: Defining dependency "pci" 00:02:31.026 Message: lib/cmdline: Defining dependency "cmdline" 00:02:31.026 Message: lib/hash: Defining dependency "hash" 00:02:31.026 Message: lib/timer: Defining dependency "timer" 00:02:31.026 Message: lib/compressdev: Defining dependency "compressdev" 00:02:31.026 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:31.026 Message: lib/dmadev: Defining dependency "dmadev" 00:02:31.026 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:31.026 Message: lib/power: Defining dependency "power" 00:02:31.026 Message: lib/reorder: Defining dependency "reorder" 00:02:31.026 Message: lib/security: Defining dependency "security" 00:02:31.026 Has header "linux/userfaultfd.h" : YES 00:02:31.026 Has header "linux/vduse.h" : YES 00:02:31.026 Message: lib/vhost: Defining dependency "vhost" 00:02:31.026 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:31.026 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:31.026 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:31.026 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:31.026 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:31.026 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:31.026 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:31.026 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:31.026 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:31.026 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:31.026 Program doxygen found: YES (/usr/bin/doxygen) 00:02:31.026 Configuring doxy-api-html.conf using configuration 00:02:31.026 Configuring doxy-api-man.conf using configuration 00:02:31.026 Program mandb found: YES (/usr/bin/mandb) 00:02:31.026 Program sphinx-build found: NO 00:02:31.026 Configuring rte_build_config.h using configuration 00:02:31.026 Message: 00:02:31.026 ================= 00:02:31.026 Applications Enabled 00:02:31.026 ================= 00:02:31.026 00:02:31.026 apps: 00:02:31.026 00:02:31.026 00:02:31.026 Message: 00:02:31.026 ================= 00:02:31.026 Libraries Enabled 00:02:31.026 ================= 00:02:31.026 00:02:31.026 libs: 00:02:31.026 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:31.026 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:31.026 cryptodev, dmadev, power, reorder, security, vhost, 00:02:31.026 00:02:31.026 Message: 00:02:31.026 =============== 00:02:31.026 Drivers Enabled 00:02:31.026 =============== 00:02:31.026 00:02:31.026 common: 00:02:31.026 00:02:31.026 bus: 00:02:31.026 pci, vdev, 00:02:31.026 mempool: 00:02:31.026 ring, 00:02:31.026 dma: 00:02:31.026 00:02:31.026 net: 00:02:31.026 00:02:31.026 crypto: 00:02:31.026 00:02:31.026 compress: 00:02:31.026 00:02:31.026 vdpa: 00:02:31.026 00:02:31.026 00:02:31.026 Message: 00:02:31.026 ================= 00:02:31.026 Content Skipped 00:02:31.026 ================= 00:02:31.026 00:02:31.026 apps: 00:02:31.026 dumpcap: explicitly disabled via build config 00:02:31.026 graph: explicitly disabled via build config 00:02:31.026 pdump: explicitly disabled via build config 00:02:31.026 proc-info: explicitly disabled via build config 00:02:31.026 test-acl: explicitly disabled via build config 00:02:31.026 test-bbdev: explicitly disabled via build config 00:02:31.026 test-cmdline: explicitly disabled via build config 00:02:31.026 test-compress-perf: explicitly disabled via build config 00:02:31.026 test-crypto-perf: explicitly disabled via build config 00:02:31.026 test-dma-perf: explicitly disabled via build config 00:02:31.026 test-eventdev: explicitly disabled via build config 00:02:31.026 test-fib: explicitly disabled via build config 00:02:31.026 test-flow-perf: explicitly disabled via build config 00:02:31.026 test-gpudev: explicitly disabled via build config 00:02:31.026 test-mldev: explicitly disabled via build config 00:02:31.026 test-pipeline: explicitly disabled via build config 00:02:31.026 test-pmd: explicitly disabled via build config 00:02:31.026 test-regex: explicitly disabled via build config 00:02:31.026 test-sad: explicitly disabled via build config 00:02:31.026 test-security-perf: explicitly disabled via build config 00:02:31.026 00:02:31.026 libs: 00:02:31.026 argparse: explicitly disabled via build config 00:02:31.026 metrics: explicitly disabled via build config 00:02:31.026 acl: explicitly disabled via build config 00:02:31.026 bbdev: explicitly disabled via build config 00:02:31.026 
bitratestats: explicitly disabled via build config 00:02:31.026 bpf: explicitly disabled via build config 00:02:31.026 cfgfile: explicitly disabled via build config 00:02:31.026 distributor: explicitly disabled via build config 00:02:31.026 efd: explicitly disabled via build config 00:02:31.026 eventdev: explicitly disabled via build config 00:02:31.026 dispatcher: explicitly disabled via build config 00:02:31.026 gpudev: explicitly disabled via build config 00:02:31.026 gro: explicitly disabled via build config 00:02:31.026 gso: explicitly disabled via build config 00:02:31.026 ip_frag: explicitly disabled via build config 00:02:31.026 jobstats: explicitly disabled via build config 00:02:31.026 latencystats: explicitly disabled via build config 00:02:31.026 lpm: explicitly disabled via build config 00:02:31.026 member: explicitly disabled via build config 00:02:31.026 pcapng: explicitly disabled via build config 00:02:31.026 rawdev: explicitly disabled via build config 00:02:31.026 regexdev: explicitly disabled via build config 00:02:31.026 mldev: explicitly disabled via build config 00:02:31.026 rib: explicitly disabled via build config 00:02:31.026 sched: explicitly disabled via build config 00:02:31.026 stack: explicitly disabled via build config 00:02:31.026 ipsec: explicitly disabled via build config 00:02:31.026 pdcp: explicitly disabled via build config 00:02:31.026 fib: explicitly disabled via build config 00:02:31.026 port: explicitly disabled via build config 00:02:31.026 pdump: explicitly disabled via build config 00:02:31.026 table: explicitly disabled via build config 00:02:31.026 pipeline: explicitly disabled via build config 00:02:31.026 graph: explicitly disabled via build config 00:02:31.026 node: explicitly disabled via build config 00:02:31.026 00:02:31.026 drivers: 00:02:31.026 common/cpt: not in enabled drivers build config 00:02:31.026 common/dpaax: not in enabled drivers build config 00:02:31.026 common/iavf: not in enabled drivers build config 00:02:31.026 common/idpf: not in enabled drivers build config 00:02:31.026 common/ionic: not in enabled drivers build config 00:02:31.026 common/mvep: not in enabled drivers build config 00:02:31.026 common/octeontx: not in enabled drivers build config 00:02:31.026 bus/auxiliary: not in enabled drivers build config 00:02:31.026 bus/cdx: not in enabled drivers build config 00:02:31.026 bus/dpaa: not in enabled drivers build config 00:02:31.026 bus/fslmc: not in enabled drivers build config 00:02:31.026 bus/ifpga: not in enabled drivers build config 00:02:31.026 bus/platform: not in enabled drivers build config 00:02:31.026 bus/uacce: not in enabled drivers build config 00:02:31.026 bus/vmbus: not in enabled drivers build config 00:02:31.026 common/cnxk: not in enabled drivers build config 00:02:31.026 common/mlx5: not in enabled drivers build config 00:02:31.026 common/nfp: not in enabled drivers build config 00:02:31.026 common/nitrox: not in enabled drivers build config 00:02:31.026 common/qat: not in enabled drivers build config 00:02:31.026 common/sfc_efx: not in enabled drivers build config 00:02:31.026 mempool/bucket: not in enabled drivers build config 00:02:31.026 mempool/cnxk: not in enabled drivers build config 00:02:31.026 mempool/dpaa: not in enabled drivers build config 00:02:31.026 mempool/dpaa2: not in enabled drivers build config 00:02:31.026 mempool/octeontx: not in enabled drivers build config 00:02:31.026 mempool/stack: not in enabled drivers build config 00:02:31.027 dma/cnxk: not in enabled drivers build 
config 00:02:31.027 dma/dpaa: not in enabled drivers build config 00:02:31.027 dma/dpaa2: not in enabled drivers build config 00:02:31.027 dma/hisilicon: not in enabled drivers build config 00:02:31.027 dma/idxd: not in enabled drivers build config 00:02:31.027 dma/ioat: not in enabled drivers build config 00:02:31.027 dma/skeleton: not in enabled drivers build config 00:02:31.027 net/af_packet: not in enabled drivers build config 00:02:31.027 net/af_xdp: not in enabled drivers build config 00:02:31.027 net/ark: not in enabled drivers build config 00:02:31.027 net/atlantic: not in enabled drivers build config 00:02:31.027 net/avp: not in enabled drivers build config 00:02:31.027 net/axgbe: not in enabled drivers build config 00:02:31.027 net/bnx2x: not in enabled drivers build config 00:02:31.027 net/bnxt: not in enabled drivers build config 00:02:31.027 net/bonding: not in enabled drivers build config 00:02:31.027 net/cnxk: not in enabled drivers build config 00:02:31.027 net/cpfl: not in enabled drivers build config 00:02:31.027 net/cxgbe: not in enabled drivers build config 00:02:31.027 net/dpaa: not in enabled drivers build config 00:02:31.027 net/dpaa2: not in enabled drivers build config 00:02:31.027 net/e1000: not in enabled drivers build config 00:02:31.027 net/ena: not in enabled drivers build config 00:02:31.027 net/enetc: not in enabled drivers build config 00:02:31.027 net/enetfec: not in enabled drivers build config 00:02:31.027 net/enic: not in enabled drivers build config 00:02:31.027 net/failsafe: not in enabled drivers build config 00:02:31.027 net/fm10k: not in enabled drivers build config 00:02:31.027 net/gve: not in enabled drivers build config 00:02:31.027 net/hinic: not in enabled drivers build config 00:02:31.027 net/hns3: not in enabled drivers build config 00:02:31.027 net/i40e: not in enabled drivers build config 00:02:31.027 net/iavf: not in enabled drivers build config 00:02:31.027 net/ice: not in enabled drivers build config 00:02:31.027 net/idpf: not in enabled drivers build config 00:02:31.027 net/igc: not in enabled drivers build config 00:02:31.027 net/ionic: not in enabled drivers build config 00:02:31.027 net/ipn3ke: not in enabled drivers build config 00:02:31.027 net/ixgbe: not in enabled drivers build config 00:02:31.027 net/mana: not in enabled drivers build config 00:02:31.027 net/memif: not in enabled drivers build config 00:02:31.027 net/mlx4: not in enabled drivers build config 00:02:31.027 net/mlx5: not in enabled drivers build config 00:02:31.027 net/mvneta: not in enabled drivers build config 00:02:31.027 net/mvpp2: not in enabled drivers build config 00:02:31.027 net/netvsc: not in enabled drivers build config 00:02:31.027 net/nfb: not in enabled drivers build config 00:02:31.027 net/nfp: not in enabled drivers build config 00:02:31.027 net/ngbe: not in enabled drivers build config 00:02:31.027 net/null: not in enabled drivers build config 00:02:31.027 net/octeontx: not in enabled drivers build config 00:02:31.027 net/octeon_ep: not in enabled drivers build config 00:02:31.027 net/pcap: not in enabled drivers build config 00:02:31.027 net/pfe: not in enabled drivers build config 00:02:31.027 net/qede: not in enabled drivers build config 00:02:31.027 net/ring: not in enabled drivers build config 00:02:31.027 net/sfc: not in enabled drivers build config 00:02:31.027 net/softnic: not in enabled drivers build config 00:02:31.027 net/tap: not in enabled drivers build config 00:02:31.027 net/thunderx: not in enabled drivers build config 00:02:31.027 
net/txgbe: not in enabled drivers build config 00:02:31.027 net/vdev_netvsc: not in enabled drivers build config 00:02:31.027 net/vhost: not in enabled drivers build config 00:02:31.027 net/virtio: not in enabled drivers build config 00:02:31.027 net/vmxnet3: not in enabled drivers build config 00:02:31.027 raw/*: missing internal dependency, "rawdev" 00:02:31.027 crypto/armv8: not in enabled drivers build config 00:02:31.027 crypto/bcmfs: not in enabled drivers build config 00:02:31.027 crypto/caam_jr: not in enabled drivers build config 00:02:31.027 crypto/ccp: not in enabled drivers build config 00:02:31.027 crypto/cnxk: not in enabled drivers build config 00:02:31.027 crypto/dpaa_sec: not in enabled drivers build config 00:02:31.027 crypto/dpaa2_sec: not in enabled drivers build config 00:02:31.027 crypto/ipsec_mb: not in enabled drivers build config 00:02:31.027 crypto/mlx5: not in enabled drivers build config 00:02:31.027 crypto/mvsam: not in enabled drivers build config 00:02:31.027 crypto/nitrox: not in enabled drivers build config 00:02:31.027 crypto/null: not in enabled drivers build config 00:02:31.027 crypto/octeontx: not in enabled drivers build config 00:02:31.027 crypto/openssl: not in enabled drivers build config 00:02:31.027 crypto/scheduler: not in enabled drivers build config 00:02:31.027 crypto/uadk: not in enabled drivers build config 00:02:31.027 crypto/virtio: not in enabled drivers build config 00:02:31.027 compress/isal: not in enabled drivers build config 00:02:31.027 compress/mlx5: not in enabled drivers build config 00:02:31.027 compress/nitrox: not in enabled drivers build config 00:02:31.027 compress/octeontx: not in enabled drivers build config 00:02:31.027 compress/zlib: not in enabled drivers build config 00:02:31.027 regex/*: missing internal dependency, "regexdev" 00:02:31.027 ml/*: missing internal dependency, "mldev" 00:02:31.027 vdpa/ifc: not in enabled drivers build config 00:02:31.027 vdpa/mlx5: not in enabled drivers build config 00:02:31.027 vdpa/nfp: not in enabled drivers build config 00:02:31.027 vdpa/sfc: not in enabled drivers build config 00:02:31.027 event/*: missing internal dependency, "eventdev" 00:02:31.027 baseband/*: missing internal dependency, "bbdev" 00:02:31.027 gpu/*: missing internal dependency, "gpudev" 00:02:31.027 00:02:31.027 00:02:31.027 Build targets in project: 85 00:02:31.027 00:02:31.027 DPDK 24.03.0 00:02:31.027 00:02:31.027 User defined options 00:02:31.027 buildtype : debug 00:02:31.027 default_library : shared 00:02:31.027 libdir : lib 00:02:31.027 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:31.027 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:31.027 c_link_args : 00:02:31.027 cpu_instruction_set: native 00:02:31.027 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:31.027 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:31.027 enable_docs : false 00:02:31.027 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:31.027 enable_kmods : false 00:02:31.027 max_lcores : 128 00:02:31.027 tests : false 00:02:31.027 00:02:31.027 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:31.027 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:31.027 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:31.027 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:31.027 [3/268] Linking static target lib/librte_kvargs.a 00:02:31.027 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:31.027 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:31.027 [6/268] Linking static target lib/librte_log.a 00:02:31.285 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.285 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:31.285 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:31.285 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:31.543 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:31.543 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:31.543 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:31.543 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:31.543 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.543 [16/268] Linking target lib/librte_log.so.24.1 00:02:31.801 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:31.801 [18/268] Linking static target lib/librte_telemetry.a 00:02:31.801 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:31.801 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:32.060 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:32.060 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:32.060 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:32.318 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:32.318 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:32.318 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:32.318 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:32.576 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:32.576 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:32.576 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.576 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:32.576 [32/268] Linking target lib/librte_telemetry.so.24.1 00:02:32.576 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:32.835 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:32.835 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:32.835 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:33.092 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:33.092 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:33.350 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:33.350 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:33.350 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:33.350 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:33.350 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:33.350 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:33.608 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:33.608 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:33.867 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:33.867 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:33.867 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:34.125 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:34.125 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:34.125 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:34.383 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:34.383 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:34.383 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:34.383 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:34.642 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:34.642 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:34.899 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:34.899 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:34.899 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:35.156 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:35.156 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:35.414 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:35.414 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:35.414 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:35.414 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:35.414 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:35.671 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:35.929 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:35.929 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:35.929 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:36.187 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:36.187 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:36.187 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:36.187 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:36.187 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:36.187 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:36.445 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:36.445 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:36.445 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:36.703 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:36.962 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:36.962 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:36.962 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:37.220 [86/268] Linking static target lib/librte_eal.a 00:02:37.220 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:37.220 [88/268] Linking static target lib/librte_ring.a 00:02:37.479 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:37.479 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:37.479 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:37.479 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:37.479 [93/268] Linking static target lib/librte_rcu.a 00:02:37.737 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:37.737 [95/268] Linking static target lib/librte_mempool.a 00:02:37.737 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.996 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:37.996 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:37.996 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:37.996 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:38.255 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:38.255 [102/268] Linking static target lib/librte_mbuf.a 00:02:38.255 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.514 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:38.772 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:38.772 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:38.772 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:38.772 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:38.772 [109/268] Linking static target lib/librte_net.a 00:02:38.772 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:39.031 [111/268] Linking static target lib/librte_meter.a 00:02:39.031 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.288 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:39.288 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.546 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:39.546 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.546 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.856 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:39.856 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:40.422 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:40.680 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:40.680 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:40.680 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:40.680 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:40.680 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:40.937 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:40.938 [127/268] Linking static target lib/librte_pci.a 00:02:40.938 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:40.938 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:40.938 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:40.938 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:40.938 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:41.195 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:41.195 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.195 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:41.195 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:41.195 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:41.195 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:41.452 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:41.452 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:41.452 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:41.452 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:41.452 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:41.452 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:41.452 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:41.710 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:41.710 [147/268] Linking static target lib/librte_cmdline.a 00:02:41.710 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:41.710 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:41.710 [150/268] Linking static target lib/librte_ethdev.a 00:02:41.967 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:41.967 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:42.225 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:42.225 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:42.225 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:42.225 [156/268] Linking static target lib/librte_timer.a 00:02:42.225 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:42.790 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:42.790 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:42.790 [160/268] Linking static target lib/librte_hash.a 00:02:42.790 
[161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:42.790 [162/268] Linking static target lib/librte_compressdev.a 00:02:42.790 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:42.790 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.047 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:43.048 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:43.305 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:43.305 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:43.305 [169/268] Linking static target lib/librte_dmadev.a 00:02:43.305 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.563 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:43.563 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:43.563 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:43.563 [174/268] Linking static target lib/librte_cryptodev.a 00:02:43.563 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:43.822 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:43.822 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.080 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.080 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:44.080 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:44.080 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:44.338 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.338 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:44.338 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:44.597 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:44.597 [186/268] Linking static target lib/librte_power.a 00:02:44.854 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:44.854 [188/268] Linking static target lib/librte_reorder.a 00:02:44.854 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:44.854 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:44.854 [191/268] Linking static target lib/librte_security.a 00:02:45.419 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:45.420 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:45.420 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:45.420 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.678 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.678 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.935 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:46.193 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:46.193 [200/268] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.450 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:46.708 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:46.708 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:46.708 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:46.708 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:46.708 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:46.966 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:46.966 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:46.966 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:46.966 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:47.224 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:47.224 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:47.224 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:47.224 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:47.224 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:47.224 [216/268] Linking static target drivers/librte_bus_pci.a 00:02:47.482 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:47.482 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:47.482 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:47.482 [220/268] Linking static target drivers/librte_bus_vdev.a 00:02:47.482 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:47.482 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:47.741 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:47.741 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:47.741 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:47.741 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:47.741 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.741 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.672 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:48.672 [230/268] Linking static target lib/librte_vhost.a 00:02:49.239 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.239 [232/268] Linking target lib/librte_eal.so.24.1 00:02:49.497 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:49.497 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:49.497 [235/268] Linking target lib/librte_ring.so.24.1 00:02:49.497 [236/268] Linking target lib/librte_timer.so.24.1 00:02:49.497 [237/268] Linking target lib/librte_meter.so.24.1 00:02:49.497 [238/268] Linking target lib/librte_pci.so.24.1 00:02:49.497 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 
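The DPDK subproject is built here with the meson configuration summarized further up (DPDK 24.03.0, buildtype debug, shared default_library, only the bus, bus/pci, bus/vdev and mempool/ring drivers enabled, docs/kmods/tests disabled, max_lcores 128) and is then compiled by ninja inside dpdk/build-tmp. As a rough sketch only, a standalone invocation that maps those summary options onto DPDK's meson command line would look like the following; the long -Ddisable_apps/-Ddisable_libs lists are omitted for brevity, and nothing below is copied from an actual command in this log:

    # Sketch: reproduce the "User defined options" summary above by hand.
    # Uses DPDK 24.03 meson option names; disable_apps/disable_libs lists omitted.
    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
        -Dbuildtype=debug \
        -Ddefault_library=shared \
        -Dlibdir=lib \
        -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Dmax_lcores=128 \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false
    ninja -C build-tmp -j 10    # build with ninja, the backend the log autodetects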
00:02:49.497 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:49.497 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:49.497 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:49.497 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:49.497 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:49.754 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:49.754 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:49.754 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:49.754 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:49.754 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:49.754 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:49.754 [251/268] Linking target lib/librte_mbuf.so.24.1 00:02:50.012 [252/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.012 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:50.012 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:50.012 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:50.012 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:50.012 [257/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.012 [258/268] Linking target lib/librte_net.so.24.1 00:02:50.271 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:50.271 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:50.271 [261/268] Linking target lib/librte_hash.so.24.1 00:02:50.271 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:50.271 [263/268] Linking target lib/librte_security.so.24.1 00:02:50.271 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:50.530 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:50.530 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:50.530 [267/268] Linking target lib/librte_power.so.24.1 00:02:50.530 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:50.530 INFO: autodetecting backend as ninja 00:02:50.530 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:51.905 CC lib/ut_mock/mock.o 00:02:51.905 CC lib/ut/ut.o 00:02:51.905 CC lib/log/log.o 00:02:51.905 CC lib/log/log_flags.o 00:02:51.905 CC lib/log/log_deprecated.o 00:02:51.905 LIB libspdk_ut_mock.a 00:02:51.905 LIB libspdk_ut.a 00:02:51.905 LIB libspdk_log.a 00:02:51.905 SO libspdk_ut_mock.so.6.0 00:02:51.905 SO libspdk_ut.so.2.0 00:02:51.905 SO libspdk_log.so.7.0 00:02:51.905 SYMLINK libspdk_ut_mock.so 00:02:51.905 SYMLINK libspdk_ut.so 00:02:52.163 SYMLINK libspdk_log.so 00:02:52.163 CC lib/util/base64.o 00:02:52.163 CC lib/util/bit_array.o 00:02:52.163 CC lib/util/cpuset.o 00:02:52.163 CC lib/util/crc16.o 00:02:52.163 CC lib/util/crc32.o 00:02:52.163 CC lib/ioat/ioat.o 00:02:52.163 CC lib/util/crc32c.o 00:02:52.163 CXX lib/trace_parser/trace.o 00:02:52.163 CC lib/dma/dma.o 00:02:52.421 CC lib/vfio_user/host/vfio_user_pci.o 00:02:52.421 CC lib/util/crc32_ieee.o 00:02:52.421 CC lib/vfio_user/host/vfio_user.o 00:02:52.421 CC 
lib/util/crc64.o 00:02:52.421 CC lib/util/dif.o 00:02:52.421 LIB libspdk_dma.a 00:02:52.421 CC lib/util/fd.o 00:02:52.421 SO libspdk_dma.so.4.0 00:02:52.679 CC lib/util/fd_group.o 00:02:52.679 CC lib/util/file.o 00:02:52.679 SYMLINK libspdk_dma.so 00:02:52.679 LIB libspdk_ioat.a 00:02:52.679 CC lib/util/hexlify.o 00:02:52.679 CC lib/util/iov.o 00:02:52.679 SO libspdk_ioat.so.7.0 00:02:52.679 CC lib/util/math.o 00:02:52.679 CC lib/util/net.o 00:02:52.679 LIB libspdk_vfio_user.a 00:02:52.679 SYMLINK libspdk_ioat.so 00:02:52.679 CC lib/util/pipe.o 00:02:52.679 SO libspdk_vfio_user.so.5.0 00:02:52.679 CC lib/util/strerror_tls.o 00:02:52.679 CC lib/util/string.o 00:02:52.679 SYMLINK libspdk_vfio_user.so 00:02:52.679 CC lib/util/uuid.o 00:02:52.679 CC lib/util/xor.o 00:02:52.938 CC lib/util/zipf.o 00:02:52.938 LIB libspdk_util.a 00:02:53.196 SO libspdk_util.so.10.0 00:02:53.196 SYMLINK libspdk_util.so 00:02:53.196 LIB libspdk_trace_parser.a 00:02:53.454 SO libspdk_trace_parser.so.5.0 00:02:53.454 SYMLINK libspdk_trace_parser.so 00:02:53.454 CC lib/json/json_parse.o 00:02:53.454 CC lib/json/json_write.o 00:02:53.454 CC lib/rdma_utils/rdma_utils.o 00:02:53.454 CC lib/json/json_util.o 00:02:53.454 CC lib/idxd/idxd.o 00:02:53.454 CC lib/idxd/idxd_user.o 00:02:53.454 CC lib/conf/conf.o 00:02:53.454 CC lib/vmd/vmd.o 00:02:53.454 CC lib/rdma_provider/common.o 00:02:53.454 CC lib/env_dpdk/env.o 00:02:53.712 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:53.712 LIB libspdk_conf.a 00:02:53.712 CC lib/env_dpdk/memory.o 00:02:53.712 CC lib/env_dpdk/pci.o 00:02:53.712 SO libspdk_conf.so.6.0 00:02:53.712 LIB libspdk_rdma_utils.a 00:02:53.712 LIB libspdk_json.a 00:02:53.712 CC lib/env_dpdk/init.o 00:02:53.712 SO libspdk_rdma_utils.so.1.0 00:02:53.712 SYMLINK libspdk_conf.so 00:02:53.712 CC lib/env_dpdk/threads.o 00:02:53.712 SO libspdk_json.so.6.0 00:02:53.971 SYMLINK libspdk_rdma_utils.so 00:02:53.971 CC lib/env_dpdk/pci_ioat.o 00:02:53.971 LIB libspdk_rdma_provider.a 00:02:53.971 SYMLINK libspdk_json.so 00:02:53.971 CC lib/env_dpdk/pci_virtio.o 00:02:53.971 SO libspdk_rdma_provider.so.6.0 00:02:53.971 CC lib/env_dpdk/pci_vmd.o 00:02:53.971 SYMLINK libspdk_rdma_provider.so 00:02:53.971 CC lib/env_dpdk/pci_idxd.o 00:02:53.971 CC lib/env_dpdk/pci_event.o 00:02:53.971 CC lib/idxd/idxd_kernel.o 00:02:53.971 CC lib/env_dpdk/sigbus_handler.o 00:02:53.971 CC lib/env_dpdk/pci_dpdk.o 00:02:54.229 CC lib/vmd/led.o 00:02:54.229 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:54.229 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:54.229 LIB libspdk_idxd.a 00:02:54.229 SO libspdk_idxd.so.12.0 00:02:54.229 CC lib/jsonrpc/jsonrpc_server.o 00:02:54.229 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:54.229 CC lib/jsonrpc/jsonrpc_client.o 00:02:54.229 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:54.229 LIB libspdk_vmd.a 00:02:54.229 SYMLINK libspdk_idxd.so 00:02:54.229 SO libspdk_vmd.so.6.0 00:02:54.487 SYMLINK libspdk_vmd.so 00:02:54.487 LIB libspdk_jsonrpc.a 00:02:54.487 SO libspdk_jsonrpc.so.6.0 00:02:54.746 SYMLINK libspdk_jsonrpc.so 00:02:55.005 LIB libspdk_env_dpdk.a 00:02:55.005 CC lib/rpc/rpc.o 00:02:55.005 SO libspdk_env_dpdk.so.15.0 00:02:55.263 LIB libspdk_rpc.a 00:02:55.263 SYMLINK libspdk_env_dpdk.so 00:02:55.263 SO libspdk_rpc.so.6.0 00:02:55.263 SYMLINK libspdk_rpc.so 00:02:55.522 CC lib/trace/trace.o 00:02:55.522 CC lib/trace/trace_flags.o 00:02:55.522 CC lib/keyring/keyring.o 00:02:55.522 CC lib/trace/trace_rpc.o 00:02:55.522 CC lib/keyring/keyring_rpc.o 00:02:55.522 CC lib/notify/notify.o 00:02:55.522 CC lib/notify/notify_rpc.o 
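With the DPDK subproject finished, the output switches to SPDK's own abbreviated make rules: CC compiles a C object, CXX a C++ object, LIB archives a static library, SO links a versioned shared object, and SYMLINK creates the unversioned .so link next to it. The configure step that set this build up is not shown in this excerpt, so the flags in the sketch below are assumptions inferred from the output (shared libraries, plus the io_uring socket module that shows up later as module/sock/uring), not the job's actual command:

    # Sketch of a configure/make invocation consistent with this output.
    # --with-shared and --with-uring are assumptions based on the artifacts
    # visible in the log, not flags copied from it.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --with-shared --with-uring
    make -j10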
00:02:55.781 LIB libspdk_notify.a 00:02:55.781 LIB libspdk_trace.a 00:02:55.781 SO libspdk_notify.so.6.0 00:02:55.781 LIB libspdk_keyring.a 00:02:55.781 SO libspdk_trace.so.10.0 00:02:55.781 SYMLINK libspdk_notify.so 00:02:55.781 SO libspdk_keyring.so.1.0 00:02:55.781 SYMLINK libspdk_trace.so 00:02:56.040 SYMLINK libspdk_keyring.so 00:02:56.040 CC lib/sock/sock.o 00:02:56.040 CC lib/thread/thread.o 00:02:56.040 CC lib/sock/sock_rpc.o 00:02:56.040 CC lib/thread/iobuf.o 00:02:56.608 LIB libspdk_sock.a 00:02:56.608 SO libspdk_sock.so.10.0 00:02:56.608 SYMLINK libspdk_sock.so 00:02:56.867 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:56.867 CC lib/nvme/nvme_ctrlr.o 00:02:56.867 CC lib/nvme/nvme_ns_cmd.o 00:02:56.867 CC lib/nvme/nvme_ns.o 00:02:56.867 CC lib/nvme/nvme_fabric.o 00:02:57.126 CC lib/nvme/nvme_pcie_common.o 00:02:57.126 CC lib/nvme/nvme_qpair.o 00:02:57.126 CC lib/nvme/nvme.o 00:02:57.126 CC lib/nvme/nvme_pcie.o 00:02:57.694 LIB libspdk_thread.a 00:02:57.694 SO libspdk_thread.so.10.1 00:02:57.952 SYMLINK libspdk_thread.so 00:02:57.952 CC lib/nvme/nvme_quirks.o 00:02:57.952 CC lib/nvme/nvme_transport.o 00:02:57.952 CC lib/nvme/nvme_discovery.o 00:02:57.952 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:57.952 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:57.952 CC lib/nvme/nvme_tcp.o 00:02:57.952 CC lib/accel/accel.o 00:02:57.952 CC lib/nvme/nvme_opal.o 00:02:58.210 CC lib/nvme/nvme_io_msg.o 00:02:58.469 CC lib/accel/accel_rpc.o 00:02:58.469 CC lib/accel/accel_sw.o 00:02:58.469 CC lib/nvme/nvme_poll_group.o 00:02:58.469 CC lib/nvme/nvme_zns.o 00:02:58.729 CC lib/blob/blobstore.o 00:02:58.729 CC lib/blob/request.o 00:02:58.729 CC lib/blob/zeroes.o 00:02:58.729 CC lib/nvme/nvme_stubs.o 00:02:58.729 CC lib/blob/blob_bs_dev.o 00:02:58.987 LIB libspdk_accel.a 00:02:58.987 CC lib/init/json_config.o 00:02:58.987 SO libspdk_accel.so.16.0 00:02:58.987 CC lib/nvme/nvme_auth.o 00:02:59.246 SYMLINK libspdk_accel.so 00:02:59.246 CC lib/virtio/virtio.o 00:02:59.246 CC lib/nvme/nvme_cuse.o 00:02:59.246 CC lib/nvme/nvme_rdma.o 00:02:59.246 CC lib/init/subsystem.o 00:02:59.246 CC lib/init/subsystem_rpc.o 00:02:59.246 CC lib/bdev/bdev.o 00:02:59.505 CC lib/init/rpc.o 00:02:59.505 CC lib/bdev/bdev_rpc.o 00:02:59.505 CC lib/bdev/bdev_zone.o 00:02:59.505 CC lib/virtio/virtio_vhost_user.o 00:02:59.505 CC lib/virtio/virtio_vfio_user.o 00:02:59.764 LIB libspdk_init.a 00:02:59.764 SO libspdk_init.so.5.0 00:02:59.764 CC lib/bdev/part.o 00:02:59.764 SYMLINK libspdk_init.so 00:02:59.764 CC lib/virtio/virtio_pci.o 00:02:59.764 CC lib/bdev/scsi_nvme.o 00:03:00.024 CC lib/event/app.o 00:03:00.024 CC lib/event/reactor.o 00:03:00.024 CC lib/event/log_rpc.o 00:03:00.024 CC lib/event/app_rpc.o 00:03:00.024 LIB libspdk_virtio.a 00:03:00.283 SO libspdk_virtio.so.7.0 00:03:00.283 CC lib/event/scheduler_static.o 00:03:00.283 SYMLINK libspdk_virtio.so 00:03:00.542 LIB libspdk_event.a 00:03:00.542 SO libspdk_event.so.14.0 00:03:00.542 SYMLINK libspdk_event.so 00:03:00.542 LIB libspdk_nvme.a 00:03:00.801 SO libspdk_nvme.so.13.1 00:03:01.061 SYMLINK libspdk_nvme.so 00:03:01.671 LIB libspdk_blob.a 00:03:01.671 SO libspdk_blob.so.11.0 00:03:01.929 SYMLINK libspdk_blob.so 00:03:01.929 LIB libspdk_bdev.a 00:03:02.187 CC lib/blobfs/blobfs.o 00:03:02.187 CC lib/blobfs/tree.o 00:03:02.187 SO libspdk_bdev.so.16.0 00:03:02.187 CC lib/lvol/lvol.o 00:03:02.187 SYMLINK libspdk_bdev.so 00:03:02.445 CC lib/ftl/ftl_core.o 00:03:02.445 CC lib/ftl/ftl_init.o 00:03:02.445 CC lib/ftl/ftl_debug.o 00:03:02.445 CC lib/nvmf/ctrlr.o 00:03:02.445 CC lib/ftl/ftl_layout.o 
00:03:02.445 CC lib/ublk/ublk.o 00:03:02.445 CC lib/nbd/nbd.o 00:03:02.445 CC lib/scsi/dev.o 00:03:02.703 CC lib/ftl/ftl_io.o 00:03:02.703 CC lib/nvmf/ctrlr_discovery.o 00:03:02.703 CC lib/scsi/lun.o 00:03:02.703 CC lib/scsi/port.o 00:03:02.703 CC lib/scsi/scsi.o 00:03:02.965 CC lib/nbd/nbd_rpc.o 00:03:02.965 CC lib/ftl/ftl_sb.o 00:03:02.965 LIB libspdk_blobfs.a 00:03:02.965 CC lib/scsi/scsi_bdev.o 00:03:02.965 SO libspdk_blobfs.so.10.0 00:03:02.965 CC lib/ublk/ublk_rpc.o 00:03:02.965 CC lib/ftl/ftl_l2p.o 00:03:02.965 CC lib/ftl/ftl_l2p_flat.o 00:03:02.965 SYMLINK libspdk_blobfs.so 00:03:02.965 CC lib/ftl/ftl_nv_cache.o 00:03:02.965 LIB libspdk_nbd.a 00:03:02.965 LIB libspdk_lvol.a 00:03:02.965 SO libspdk_nbd.so.7.0 00:03:02.966 SO libspdk_lvol.so.10.0 00:03:03.233 CC lib/ftl/ftl_band.o 00:03:03.233 SYMLINK libspdk_lvol.so 00:03:03.233 SYMLINK libspdk_nbd.so 00:03:03.233 CC lib/scsi/scsi_pr.o 00:03:03.233 CC lib/ftl/ftl_band_ops.o 00:03:03.233 CC lib/ftl/ftl_writer.o 00:03:03.233 LIB libspdk_ublk.a 00:03:03.233 SO libspdk_ublk.so.3.0 00:03:03.233 CC lib/scsi/scsi_rpc.o 00:03:03.233 CC lib/nvmf/ctrlr_bdev.o 00:03:03.233 SYMLINK libspdk_ublk.so 00:03:03.233 CC lib/ftl/ftl_rq.o 00:03:03.491 CC lib/nvmf/subsystem.o 00:03:03.491 CC lib/nvmf/nvmf.o 00:03:03.491 CC lib/nvmf/nvmf_rpc.o 00:03:03.491 CC lib/nvmf/transport.o 00:03:03.491 CC lib/nvmf/tcp.o 00:03:03.491 CC lib/scsi/task.o 00:03:03.491 CC lib/ftl/ftl_reloc.o 00:03:03.750 LIB libspdk_scsi.a 00:03:03.750 SO libspdk_scsi.so.9.0 00:03:03.750 CC lib/ftl/ftl_l2p_cache.o 00:03:04.008 SYMLINK libspdk_scsi.so 00:03:04.008 CC lib/ftl/ftl_p2l.o 00:03:04.008 CC lib/ftl/mngt/ftl_mngt.o 00:03:04.008 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:04.008 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:04.267 CC lib/nvmf/stubs.o 00:03:04.267 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:04.267 CC lib/nvmf/mdns_server.o 00:03:04.267 CC lib/nvmf/rdma.o 00:03:04.267 CC lib/nvmf/auth.o 00:03:04.267 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:04.267 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:04.525 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:04.525 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:04.525 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:04.525 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:04.525 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:04.525 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:04.784 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:04.784 CC lib/iscsi/conn.o 00:03:04.784 CC lib/vhost/vhost.o 00:03:04.784 CC lib/iscsi/init_grp.o 00:03:04.784 CC lib/ftl/utils/ftl_conf.o 00:03:04.784 CC lib/ftl/utils/ftl_md.o 00:03:05.042 CC lib/vhost/vhost_rpc.o 00:03:05.042 CC lib/vhost/vhost_scsi.o 00:03:05.042 CC lib/vhost/vhost_blk.o 00:03:05.042 CC lib/ftl/utils/ftl_mempool.o 00:03:05.042 CC lib/iscsi/iscsi.o 00:03:05.042 CC lib/iscsi/md5.o 00:03:05.300 CC lib/vhost/rte_vhost_user.o 00:03:05.300 CC lib/iscsi/param.o 00:03:05.300 CC lib/ftl/utils/ftl_bitmap.o 00:03:05.300 CC lib/iscsi/portal_grp.o 00:03:05.558 CC lib/ftl/utils/ftl_property.o 00:03:05.558 CC lib/iscsi/tgt_node.o 00:03:05.558 CC lib/iscsi/iscsi_subsystem.o 00:03:05.558 CC lib/iscsi/iscsi_rpc.o 00:03:05.558 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:05.816 CC lib/iscsi/task.o 00:03:05.816 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:06.075 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:06.075 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:06.075 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:06.075 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:06.075 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:06.075 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:06.075 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:03:06.075 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:06.333 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:06.333 CC lib/ftl/base/ftl_base_dev.o 00:03:06.333 CC lib/ftl/base/ftl_base_bdev.o 00:03:06.333 LIB libspdk_vhost.a 00:03:06.333 CC lib/ftl/ftl_trace.o 00:03:06.333 LIB libspdk_nvmf.a 00:03:06.333 SO libspdk_vhost.so.8.0 00:03:06.333 LIB libspdk_iscsi.a 00:03:06.591 SO libspdk_nvmf.so.19.0 00:03:06.591 SYMLINK libspdk_vhost.so 00:03:06.591 SO libspdk_iscsi.so.8.0 00:03:06.591 LIB libspdk_ftl.a 00:03:06.849 SYMLINK libspdk_iscsi.so 00:03:06.849 SYMLINK libspdk_nvmf.so 00:03:06.849 SO libspdk_ftl.so.9.0 00:03:07.414 SYMLINK libspdk_ftl.so 00:03:07.672 CC module/env_dpdk/env_dpdk_rpc.o 00:03:07.672 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:07.672 CC module/accel/ioat/accel_ioat.o 00:03:07.672 CC module/keyring/linux/keyring.o 00:03:07.672 CC module/sock/posix/posix.o 00:03:07.672 CC module/keyring/file/keyring.o 00:03:07.672 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:07.672 CC module/accel/error/accel_error.o 00:03:07.672 CC module/blob/bdev/blob_bdev.o 00:03:07.672 CC module/accel/dsa/accel_dsa.o 00:03:07.672 LIB libspdk_env_dpdk_rpc.a 00:03:07.672 SO libspdk_env_dpdk_rpc.so.6.0 00:03:07.929 SYMLINK libspdk_env_dpdk_rpc.so 00:03:07.929 CC module/keyring/linux/keyring_rpc.o 00:03:07.929 CC module/keyring/file/keyring_rpc.o 00:03:07.929 LIB libspdk_scheduler_dpdk_governor.a 00:03:07.929 CC module/accel/ioat/accel_ioat_rpc.o 00:03:07.929 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:07.929 LIB libspdk_scheduler_dynamic.a 00:03:07.929 CC module/accel/error/accel_error_rpc.o 00:03:07.929 SO libspdk_scheduler_dynamic.so.4.0 00:03:07.929 CC module/accel/dsa/accel_dsa_rpc.o 00:03:07.929 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:07.929 LIB libspdk_keyring_linux.a 00:03:07.929 LIB libspdk_keyring_file.a 00:03:07.929 LIB libspdk_blob_bdev.a 00:03:07.929 SYMLINK libspdk_scheduler_dynamic.so 00:03:07.929 SO libspdk_blob_bdev.so.11.0 00:03:07.929 SO libspdk_keyring_linux.so.1.0 00:03:07.929 SO libspdk_keyring_file.so.1.0 00:03:07.929 CC module/sock/uring/uring.o 00:03:07.929 LIB libspdk_accel_ioat.a 00:03:07.929 LIB libspdk_accel_error.a 00:03:07.929 SO libspdk_accel_ioat.so.6.0 00:03:08.187 SYMLINK libspdk_blob_bdev.so 00:03:08.187 SYMLINK libspdk_keyring_file.so 00:03:08.187 SYMLINK libspdk_keyring_linux.so 00:03:08.187 SO libspdk_accel_error.so.2.0 00:03:08.187 LIB libspdk_accel_dsa.a 00:03:08.187 SYMLINK libspdk_accel_ioat.so 00:03:08.187 CC module/scheduler/gscheduler/gscheduler.o 00:03:08.187 SO libspdk_accel_dsa.so.5.0 00:03:08.187 SYMLINK libspdk_accel_error.so 00:03:08.187 CC module/accel/iaa/accel_iaa.o 00:03:08.187 CC module/accel/iaa/accel_iaa_rpc.o 00:03:08.187 SYMLINK libspdk_accel_dsa.so 00:03:08.187 LIB libspdk_scheduler_gscheduler.a 00:03:08.448 SO libspdk_scheduler_gscheduler.so.4.0 00:03:08.448 CC module/blobfs/bdev/blobfs_bdev.o 00:03:08.448 CC module/bdev/gpt/gpt.o 00:03:08.448 CC module/bdev/lvol/vbdev_lvol.o 00:03:08.448 CC module/bdev/delay/vbdev_delay.o 00:03:08.448 CC module/bdev/error/vbdev_error.o 00:03:08.448 LIB libspdk_accel_iaa.a 00:03:08.448 SYMLINK libspdk_scheduler_gscheduler.so 00:03:08.448 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:08.448 LIB libspdk_sock_posix.a 00:03:08.448 SO libspdk_accel_iaa.so.3.0 00:03:08.448 SO libspdk_sock_posix.so.6.0 00:03:08.448 CC module/bdev/malloc/bdev_malloc.o 00:03:08.448 SYMLINK libspdk_accel_iaa.so 00:03:08.448 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:08.448 SYMLINK 
libspdk_sock_posix.so 00:03:08.448 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:08.448 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:08.705 CC module/bdev/gpt/vbdev_gpt.o 00:03:08.705 CC module/bdev/error/vbdev_error_rpc.o 00:03:08.706 LIB libspdk_sock_uring.a 00:03:08.706 SO libspdk_sock_uring.so.5.0 00:03:08.706 LIB libspdk_blobfs_bdev.a 00:03:08.706 LIB libspdk_bdev_delay.a 00:03:08.706 SYMLINK libspdk_sock_uring.so 00:03:08.706 SO libspdk_blobfs_bdev.so.6.0 00:03:08.706 LIB libspdk_bdev_error.a 00:03:08.706 SO libspdk_bdev_delay.so.6.0 00:03:08.963 LIB libspdk_bdev_gpt.a 00:03:08.963 SO libspdk_bdev_error.so.6.0 00:03:08.963 LIB libspdk_bdev_malloc.a 00:03:08.963 SYMLINK libspdk_blobfs_bdev.so 00:03:08.963 CC module/bdev/null/bdev_null.o 00:03:08.963 CC module/bdev/null/bdev_null_rpc.o 00:03:08.963 SYMLINK libspdk_bdev_delay.so 00:03:08.963 SO libspdk_bdev_gpt.so.6.0 00:03:08.963 SO libspdk_bdev_malloc.so.6.0 00:03:08.963 SYMLINK libspdk_bdev_error.so 00:03:08.963 LIB libspdk_bdev_lvol.a 00:03:08.963 SYMLINK libspdk_bdev_gpt.so 00:03:08.963 SYMLINK libspdk_bdev_malloc.so 00:03:08.963 SO libspdk_bdev_lvol.so.6.0 00:03:08.963 CC module/bdev/nvme/bdev_nvme.o 00:03:08.963 CC module/bdev/passthru/vbdev_passthru.o 00:03:08.963 SYMLINK libspdk_bdev_lvol.so 00:03:09.222 CC module/bdev/raid/bdev_raid.o 00:03:09.222 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:09.222 CC module/bdev/split/vbdev_split.o 00:03:09.222 LIB libspdk_bdev_null.a 00:03:09.222 CC module/bdev/uring/bdev_uring.o 00:03:09.222 CC module/bdev/aio/bdev_aio.o 00:03:09.222 SO libspdk_bdev_null.so.6.0 00:03:09.222 CC module/bdev/iscsi/bdev_iscsi.o 00:03:09.222 CC module/bdev/ftl/bdev_ftl.o 00:03:09.222 SYMLINK libspdk_bdev_null.so 00:03:09.222 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:09.222 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:09.480 CC module/bdev/split/vbdev_split_rpc.o 00:03:09.480 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:09.480 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:09.480 CC module/bdev/aio/bdev_aio_rpc.o 00:03:09.480 LIB libspdk_bdev_passthru.a 00:03:09.480 LIB libspdk_bdev_ftl.a 00:03:09.480 LIB libspdk_bdev_split.a 00:03:09.480 CC module/bdev/uring/bdev_uring_rpc.o 00:03:09.480 SO libspdk_bdev_ftl.so.6.0 00:03:09.480 SO libspdk_bdev_passthru.so.6.0 00:03:09.480 SO libspdk_bdev_split.so.6.0 00:03:09.738 LIB libspdk_bdev_zone_block.a 00:03:09.738 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:09.738 SYMLINK libspdk_bdev_passthru.so 00:03:09.738 SYMLINK libspdk_bdev_ftl.so 00:03:09.738 SYMLINK libspdk_bdev_split.so 00:03:09.738 CC module/bdev/nvme/nvme_rpc.o 00:03:09.738 CC module/bdev/raid/bdev_raid_rpc.o 00:03:09.738 SO libspdk_bdev_zone_block.so.6.0 00:03:09.738 LIB libspdk_bdev_aio.a 00:03:09.738 LIB libspdk_bdev_uring.a 00:03:09.738 SYMLINK libspdk_bdev_zone_block.so 00:03:09.738 SO libspdk_bdev_aio.so.6.0 00:03:09.738 CC module/bdev/raid/bdev_raid_sb.o 00:03:09.738 SO libspdk_bdev_uring.so.6.0 00:03:09.738 LIB libspdk_bdev_iscsi.a 00:03:09.738 SYMLINK libspdk_bdev_aio.so 00:03:09.738 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:09.738 CC module/bdev/nvme/bdev_mdns_client.o 00:03:09.996 SO libspdk_bdev_iscsi.so.6.0 00:03:09.996 SYMLINK libspdk_bdev_uring.so 00:03:09.996 CC module/bdev/raid/raid0.o 00:03:09.996 CC module/bdev/nvme/vbdev_opal.o 00:03:09.996 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:09.996 SYMLINK libspdk_bdev_iscsi.so 00:03:09.996 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:09.996 CC module/bdev/raid/raid1.o 00:03:09.996 CC module/bdev/raid/concat.o 00:03:09.996 CC 
module/bdev/virtio/bdev_virtio_blk.o 00:03:09.996 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:10.254 LIB libspdk_bdev_raid.a 00:03:10.511 SO libspdk_bdev_raid.so.6.0 00:03:10.511 LIB libspdk_bdev_virtio.a 00:03:10.511 SO libspdk_bdev_virtio.so.6.0 00:03:10.511 SYMLINK libspdk_bdev_raid.so 00:03:10.511 SYMLINK libspdk_bdev_virtio.so 00:03:11.446 LIB libspdk_bdev_nvme.a 00:03:11.446 SO libspdk_bdev_nvme.so.7.0 00:03:11.446 SYMLINK libspdk_bdev_nvme.so 00:03:12.011 CC module/event/subsystems/iobuf/iobuf.o 00:03:12.011 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:12.011 CC module/event/subsystems/vmd/vmd.o 00:03:12.011 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:12.011 CC module/event/subsystems/scheduler/scheduler.o 00:03:12.011 CC module/event/subsystems/keyring/keyring.o 00:03:12.011 CC module/event/subsystems/sock/sock.o 00:03:12.011 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:12.011 LIB libspdk_event_keyring.a 00:03:12.011 LIB libspdk_event_scheduler.a 00:03:12.011 LIB libspdk_event_vhost_blk.a 00:03:12.011 LIB libspdk_event_vmd.a 00:03:12.011 LIB libspdk_event_sock.a 00:03:12.011 LIB libspdk_event_iobuf.a 00:03:12.011 SO libspdk_event_scheduler.so.4.0 00:03:12.011 SO libspdk_event_vhost_blk.so.3.0 00:03:12.011 SO libspdk_event_keyring.so.1.0 00:03:12.011 SO libspdk_event_sock.so.5.0 00:03:12.011 SO libspdk_event_vmd.so.6.0 00:03:12.011 SO libspdk_event_iobuf.so.3.0 00:03:12.269 SYMLINK libspdk_event_scheduler.so 00:03:12.269 SYMLINK libspdk_event_vhost_blk.so 00:03:12.269 SYMLINK libspdk_event_keyring.so 00:03:12.269 SYMLINK libspdk_event_vmd.so 00:03:12.269 SYMLINK libspdk_event_iobuf.so 00:03:12.269 SYMLINK libspdk_event_sock.so 00:03:12.528 CC module/event/subsystems/accel/accel.o 00:03:12.528 LIB libspdk_event_accel.a 00:03:12.786 SO libspdk_event_accel.so.6.0 00:03:12.786 SYMLINK libspdk_event_accel.so 00:03:13.045 CC module/event/subsystems/bdev/bdev.o 00:03:13.302 LIB libspdk_event_bdev.a 00:03:13.302 SO libspdk_event_bdev.so.6.0 00:03:13.302 SYMLINK libspdk_event_bdev.so 00:03:13.560 CC module/event/subsystems/nbd/nbd.o 00:03:13.560 CC module/event/subsystems/ublk/ublk.o 00:03:13.560 CC module/event/subsystems/scsi/scsi.o 00:03:13.560 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:13.560 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:13.818 LIB libspdk_event_ublk.a 00:03:13.818 LIB libspdk_event_nbd.a 00:03:13.818 LIB libspdk_event_scsi.a 00:03:13.818 SO libspdk_event_nbd.so.6.0 00:03:13.818 SO libspdk_event_ublk.so.3.0 00:03:13.818 SO libspdk_event_scsi.so.6.0 00:03:13.818 SYMLINK libspdk_event_nbd.so 00:03:13.818 SYMLINK libspdk_event_ublk.so 00:03:13.818 LIB libspdk_event_nvmf.a 00:03:13.818 SYMLINK libspdk_event_scsi.so 00:03:13.818 SO libspdk_event_nvmf.so.6.0 00:03:14.076 SYMLINK libspdk_event_nvmf.so 00:03:14.076 CC module/event/subsystems/iscsi/iscsi.o 00:03:14.076 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:14.334 LIB libspdk_event_vhost_scsi.a 00:03:14.334 LIB libspdk_event_iscsi.a 00:03:14.334 SO libspdk_event_vhost_scsi.so.3.0 00:03:14.334 SO libspdk_event_iscsi.so.6.0 00:03:14.334 SYMLINK libspdk_event_vhost_scsi.so 00:03:14.334 SYMLINK libspdk_event_iscsi.so 00:03:14.592 SO libspdk.so.6.0 00:03:14.593 SYMLINK libspdk.so 00:03:14.868 CC app/trace_record/trace_record.o 00:03:14.868 CXX app/trace/trace.o 00:03:14.868 TEST_HEADER include/spdk/accel.h 00:03:14.868 TEST_HEADER include/spdk/accel_module.h 00:03:14.868 TEST_HEADER include/spdk/assert.h 00:03:14.868 TEST_HEADER include/spdk/barrier.h 00:03:14.868 TEST_HEADER 
include/spdk/base64.h 00:03:14.868 TEST_HEADER include/spdk/bdev.h 00:03:14.868 TEST_HEADER include/spdk/bdev_module.h 00:03:14.868 TEST_HEADER include/spdk/bdev_zone.h 00:03:14.868 TEST_HEADER include/spdk/bit_array.h 00:03:14.868 TEST_HEADER include/spdk/bit_pool.h 00:03:14.868 TEST_HEADER include/spdk/blob_bdev.h 00:03:14.868 CC app/nvmf_tgt/nvmf_main.o 00:03:14.868 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:14.868 TEST_HEADER include/spdk/blobfs.h 00:03:14.868 TEST_HEADER include/spdk/blob.h 00:03:14.868 TEST_HEADER include/spdk/conf.h 00:03:14.868 TEST_HEADER include/spdk/config.h 00:03:14.868 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:14.868 TEST_HEADER include/spdk/cpuset.h 00:03:14.868 TEST_HEADER include/spdk/crc16.h 00:03:14.868 TEST_HEADER include/spdk/crc32.h 00:03:14.868 TEST_HEADER include/spdk/crc64.h 00:03:14.868 TEST_HEADER include/spdk/dif.h 00:03:14.868 TEST_HEADER include/spdk/dma.h 00:03:14.868 TEST_HEADER include/spdk/endian.h 00:03:14.868 TEST_HEADER include/spdk/env_dpdk.h 00:03:14.868 TEST_HEADER include/spdk/env.h 00:03:14.868 TEST_HEADER include/spdk/event.h 00:03:14.868 TEST_HEADER include/spdk/fd_group.h 00:03:14.868 TEST_HEADER include/spdk/fd.h 00:03:14.868 TEST_HEADER include/spdk/file.h 00:03:14.868 TEST_HEADER include/spdk/ftl.h 00:03:14.868 TEST_HEADER include/spdk/gpt_spec.h 00:03:14.868 CC examples/util/zipf/zipf.o 00:03:14.868 TEST_HEADER include/spdk/hexlify.h 00:03:14.868 TEST_HEADER include/spdk/histogram_data.h 00:03:14.868 TEST_HEADER include/spdk/idxd.h 00:03:14.868 TEST_HEADER include/spdk/idxd_spec.h 00:03:14.868 CC test/thread/poller_perf/poller_perf.o 00:03:14.868 TEST_HEADER include/spdk/init.h 00:03:14.868 TEST_HEADER include/spdk/ioat.h 00:03:15.135 CC examples/ioat/perf/perf.o 00:03:15.135 TEST_HEADER include/spdk/ioat_spec.h 00:03:15.135 TEST_HEADER include/spdk/iscsi_spec.h 00:03:15.135 TEST_HEADER include/spdk/json.h 00:03:15.135 TEST_HEADER include/spdk/jsonrpc.h 00:03:15.135 TEST_HEADER include/spdk/keyring.h 00:03:15.135 TEST_HEADER include/spdk/keyring_module.h 00:03:15.135 TEST_HEADER include/spdk/likely.h 00:03:15.135 TEST_HEADER include/spdk/log.h 00:03:15.135 TEST_HEADER include/spdk/lvol.h 00:03:15.135 TEST_HEADER include/spdk/memory.h 00:03:15.135 TEST_HEADER include/spdk/mmio.h 00:03:15.135 TEST_HEADER include/spdk/nbd.h 00:03:15.135 TEST_HEADER include/spdk/net.h 00:03:15.135 TEST_HEADER include/spdk/notify.h 00:03:15.135 TEST_HEADER include/spdk/nvme.h 00:03:15.135 TEST_HEADER include/spdk/nvme_intel.h 00:03:15.135 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:15.135 CC test/dma/test_dma/test_dma.o 00:03:15.135 CC test/app/bdev_svc/bdev_svc.o 00:03:15.135 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:15.135 TEST_HEADER include/spdk/nvme_spec.h 00:03:15.135 TEST_HEADER include/spdk/nvme_zns.h 00:03:15.135 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:15.135 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:15.135 TEST_HEADER include/spdk/nvmf.h 00:03:15.135 TEST_HEADER include/spdk/nvmf_spec.h 00:03:15.135 TEST_HEADER include/spdk/nvmf_transport.h 00:03:15.135 TEST_HEADER include/spdk/opal.h 00:03:15.135 TEST_HEADER include/spdk/opal_spec.h 00:03:15.135 TEST_HEADER include/spdk/pci_ids.h 00:03:15.135 TEST_HEADER include/spdk/pipe.h 00:03:15.135 TEST_HEADER include/spdk/queue.h 00:03:15.135 TEST_HEADER include/spdk/reduce.h 00:03:15.135 TEST_HEADER include/spdk/rpc.h 00:03:15.135 TEST_HEADER include/spdk/scheduler.h 00:03:15.135 TEST_HEADER include/spdk/scsi.h 00:03:15.135 TEST_HEADER include/spdk/scsi_spec.h 00:03:15.135 
TEST_HEADER include/spdk/sock.h 00:03:15.135 TEST_HEADER include/spdk/stdinc.h 00:03:15.135 TEST_HEADER include/spdk/string.h 00:03:15.135 TEST_HEADER include/spdk/thread.h 00:03:15.135 TEST_HEADER include/spdk/trace.h 00:03:15.135 TEST_HEADER include/spdk/trace_parser.h 00:03:15.135 TEST_HEADER include/spdk/tree.h 00:03:15.135 TEST_HEADER include/spdk/ublk.h 00:03:15.135 TEST_HEADER include/spdk/util.h 00:03:15.135 TEST_HEADER include/spdk/uuid.h 00:03:15.135 TEST_HEADER include/spdk/version.h 00:03:15.135 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:15.135 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:15.135 TEST_HEADER include/spdk/vhost.h 00:03:15.135 TEST_HEADER include/spdk/vmd.h 00:03:15.135 TEST_HEADER include/spdk/xor.h 00:03:15.135 TEST_HEADER include/spdk/zipf.h 00:03:15.135 CXX test/cpp_headers/accel.o 00:03:15.135 LINK poller_perf 00:03:15.135 LINK nvmf_tgt 00:03:15.135 LINK interrupt_tgt 00:03:15.135 LINK zipf 00:03:15.135 LINK spdk_trace_record 00:03:15.135 LINK ioat_perf 00:03:15.135 LINK bdev_svc 00:03:15.393 CXX test/cpp_headers/accel_module.o 00:03:15.393 LINK spdk_trace 00:03:15.393 CC examples/ioat/verify/verify.o 00:03:15.393 LINK test_dma 00:03:15.393 CC app/iscsi_tgt/iscsi_tgt.o 00:03:15.393 CC app/spdk_lspci/spdk_lspci.o 00:03:15.393 CXX test/cpp_headers/assert.o 00:03:15.651 CC app/spdk_tgt/spdk_tgt.o 00:03:15.651 CC test/app/histogram_perf/histogram_perf.o 00:03:15.651 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:15.651 LINK spdk_lspci 00:03:15.651 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:15.651 LINK verify 00:03:15.651 LINK iscsi_tgt 00:03:15.651 CXX test/cpp_headers/barrier.o 00:03:15.651 CC test/env/mem_callbacks/mem_callbacks.o 00:03:15.651 LINK histogram_perf 00:03:15.909 CC test/event/event_perf/event_perf.o 00:03:15.909 LINK spdk_tgt 00:03:15.909 CXX test/cpp_headers/base64.o 00:03:15.909 CC test/event/reactor/reactor.o 00:03:15.909 LINK nvme_fuzz 00:03:15.909 LINK event_perf 00:03:15.909 LINK reactor 00:03:15.909 CC test/event/reactor_perf/reactor_perf.o 00:03:15.909 CC test/event/app_repeat/app_repeat.o 00:03:16.167 CXX test/cpp_headers/bdev.o 00:03:16.167 CC examples/thread/thread/thread_ex.o 00:03:16.167 CC app/spdk_nvme_perf/perf.o 00:03:16.167 LINK reactor_perf 00:03:16.167 LINK app_repeat 00:03:16.167 CC test/rpc_client/rpc_client_test.o 00:03:16.167 CXX test/cpp_headers/bdev_module.o 00:03:16.167 CC test/event/scheduler/scheduler.o 00:03:16.427 LINK thread 00:03:16.427 CC test/nvme/aer/aer.o 00:03:16.427 LINK mem_callbacks 00:03:16.427 LINK rpc_client_test 00:03:16.427 CXX test/cpp_headers/bdev_zone.o 00:03:16.427 LINK scheduler 00:03:16.686 CC test/accel/dif/dif.o 00:03:16.686 CC test/env/vtophys/vtophys.o 00:03:16.686 CC test/blobfs/mkfs/mkfs.o 00:03:16.686 LINK aer 00:03:16.686 CXX test/cpp_headers/bit_array.o 00:03:16.686 LINK vtophys 00:03:16.686 CC examples/sock/hello_world/hello_sock.o 00:03:16.686 CC examples/vmd/lsvmd/lsvmd.o 00:03:16.945 LINK mkfs 00:03:16.945 CXX test/cpp_headers/bit_pool.o 00:03:16.945 LINK spdk_nvme_perf 00:03:16.945 CC test/nvme/reset/reset.o 00:03:16.945 LINK lsvmd 00:03:16.945 CC examples/idxd/perf/perf.o 00:03:16.945 LINK hello_sock 00:03:16.945 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:16.945 LINK dif 00:03:16.945 CXX test/cpp_headers/blob_bdev.o 00:03:17.203 CC test/nvme/sgl/sgl.o 00:03:17.203 CC app/spdk_nvme_identify/identify.o 00:03:17.203 LINK reset 00:03:17.203 CC examples/vmd/led/led.o 00:03:17.203 LINK env_dpdk_post_init 00:03:17.203 CC app/spdk_nvme_discover/discovery_aer.o 
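The TEST_HEADER include/spdk/*.h entries interleaved above, together with the CXX test/cpp_headers/*.o compiles that follow, appear to be SPDK's public-header check: each installed header is compiled on its own as C++ so that C++ applications can include it directly. A minimal stand-alone sketch of the same idea is below; the compiler, language standard and include path are illustrative assumptions, not the test's actual build rules:

    # Sketch: check that every public header parses stand-alone under a C++ compiler.
    # Compiler choice, -std level and include path are assumptions for illustration.
    cd /home/vagrant/spdk_repo/spdk
    for hdr in include/spdk/*.h; do
        g++ -x c++ -std=c++11 -fsyntax-only -Iinclude "$hdr" || echo "FAIL: $hdr"
    done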
00:03:17.203 CXX test/cpp_headers/blobfs_bdev.o 00:03:17.203 LINK idxd_perf 00:03:17.203 LINK iscsi_fuzz 00:03:17.462 LINK led 00:03:17.462 LINK sgl 00:03:17.462 CC test/app/jsoncat/jsoncat.o 00:03:17.462 CC test/env/memory/memory_ut.o 00:03:17.462 CXX test/cpp_headers/blobfs.o 00:03:17.462 LINK spdk_nvme_discover 00:03:17.462 CC test/lvol/esnap/esnap.o 00:03:17.462 CC test/app/stub/stub.o 00:03:17.462 LINK jsoncat 00:03:17.721 CXX test/cpp_headers/blob.o 00:03:17.721 CC test/nvme/e2edp/nvme_dp.o 00:03:17.721 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:17.721 CC examples/accel/perf/accel_perf.o 00:03:17.721 CC test/nvme/overhead/overhead.o 00:03:17.721 LINK stub 00:03:17.721 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:17.721 CXX test/cpp_headers/conf.o 00:03:17.721 CC test/nvme/err_injection/err_injection.o 00:03:17.979 CXX test/cpp_headers/config.o 00:03:17.979 LINK spdk_nvme_identify 00:03:17.979 LINK nvme_dp 00:03:17.979 CXX test/cpp_headers/cpuset.o 00:03:17.979 LINK overhead 00:03:17.979 LINK err_injection 00:03:18.236 CC examples/blob/hello_world/hello_blob.o 00:03:18.236 CXX test/cpp_headers/crc16.o 00:03:18.236 CC app/spdk_top/spdk_top.o 00:03:18.236 LINK accel_perf 00:03:18.236 CXX test/cpp_headers/crc32.o 00:03:18.236 LINK vhost_fuzz 00:03:18.236 CC examples/blob/cli/blobcli.o 00:03:18.236 CC test/nvme/startup/startup.o 00:03:18.236 CXX test/cpp_headers/crc64.o 00:03:18.236 CXX test/cpp_headers/dif.o 00:03:18.494 CXX test/cpp_headers/dma.o 00:03:18.494 LINK hello_blob 00:03:18.494 CC test/nvme/reserve/reserve.o 00:03:18.494 LINK startup 00:03:18.495 CXX test/cpp_headers/endian.o 00:03:18.495 LINK memory_ut 00:03:18.753 CXX test/cpp_headers/env_dpdk.o 00:03:18.753 LINK reserve 00:03:18.753 CC examples/nvme/hello_world/hello_world.o 00:03:18.753 CC examples/bdev/hello_world/hello_bdev.o 00:03:18.753 CC examples/bdev/bdevperf/bdevperf.o 00:03:18.753 LINK blobcli 00:03:18.753 CXX test/cpp_headers/env.o 00:03:18.753 CC test/env/pci/pci_ut.o 00:03:19.011 CC examples/nvme/reconnect/reconnect.o 00:03:19.011 LINK hello_world 00:03:19.011 CXX test/cpp_headers/event.o 00:03:19.011 CC test/nvme/simple_copy/simple_copy.o 00:03:19.011 LINK hello_bdev 00:03:19.011 LINK spdk_top 00:03:19.011 CXX test/cpp_headers/fd_group.o 00:03:19.269 CXX test/cpp_headers/fd.o 00:03:19.269 CC test/bdev/bdevio/bdevio.o 00:03:19.269 LINK simple_copy 00:03:19.269 CC test/nvme/connect_stress/connect_stress.o 00:03:19.269 LINK pci_ut 00:03:19.269 LINK reconnect 00:03:19.269 CC app/vhost/vhost.o 00:03:19.269 CXX test/cpp_headers/file.o 00:03:19.269 LINK connect_stress 00:03:19.269 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:19.527 CC test/nvme/boot_partition/boot_partition.o 00:03:19.527 LINK bdevperf 00:03:19.527 CXX test/cpp_headers/ftl.o 00:03:19.527 LINK vhost 00:03:19.527 CC test/nvme/compliance/nvme_compliance.o 00:03:19.527 CC test/nvme/fused_ordering/fused_ordering.o 00:03:19.527 LINK bdevio 00:03:19.527 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:19.527 LINK boot_partition 00:03:19.786 CXX test/cpp_headers/gpt_spec.o 00:03:19.786 LINK fused_ordering 00:03:19.786 CC test/nvme/fdp/fdp.o 00:03:19.786 LINK doorbell_aers 00:03:19.786 LINK nvme_compliance 00:03:19.786 CC app/spdk_dd/spdk_dd.o 00:03:19.786 CC test/nvme/cuse/cuse.o 00:03:19.786 CXX test/cpp_headers/hexlify.o 00:03:19.786 LINK nvme_manage 00:03:20.044 CXX test/cpp_headers/histogram_data.o 00:03:20.044 CC app/fio/nvme/fio_plugin.o 00:03:20.044 CXX test/cpp_headers/idxd.o 00:03:20.044 CXX test/cpp_headers/idxd_spec.o 00:03:20.044 
LINK fdp 00:03:20.044 CC examples/nvme/arbitration/arbitration.o 00:03:20.303 CC examples/nvme/hotplug/hotplug.o 00:03:20.303 CXX test/cpp_headers/init.o 00:03:20.303 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:20.303 CC examples/nvme/abort/abort.o 00:03:20.303 LINK spdk_dd 00:03:20.303 CXX test/cpp_headers/ioat.o 00:03:20.303 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:20.303 LINK cmb_copy 00:03:20.303 LINK hotplug 00:03:20.561 LINK arbitration 00:03:20.561 LINK spdk_nvme 00:03:20.561 CXX test/cpp_headers/ioat_spec.o 00:03:20.561 CXX test/cpp_headers/iscsi_spec.o 00:03:20.561 CXX test/cpp_headers/json.o 00:03:20.561 LINK pmr_persistence 00:03:20.561 CXX test/cpp_headers/jsonrpc.o 00:03:20.561 LINK abort 00:03:20.561 CXX test/cpp_headers/keyring.o 00:03:20.561 CXX test/cpp_headers/keyring_module.o 00:03:20.819 CC app/fio/bdev/fio_plugin.o 00:03:20.819 CXX test/cpp_headers/likely.o 00:03:20.819 CXX test/cpp_headers/log.o 00:03:20.819 CXX test/cpp_headers/lvol.o 00:03:20.819 CXX test/cpp_headers/memory.o 00:03:20.819 CXX test/cpp_headers/mmio.o 00:03:20.819 CXX test/cpp_headers/nbd.o 00:03:20.819 CXX test/cpp_headers/net.o 00:03:20.819 CXX test/cpp_headers/notify.o 00:03:20.819 CXX test/cpp_headers/nvme.o 00:03:20.819 CXX test/cpp_headers/nvme_intel.o 00:03:20.819 CXX test/cpp_headers/nvme_ocssd.o 00:03:21.077 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:21.077 CC examples/nvmf/nvmf/nvmf.o 00:03:21.077 CXX test/cpp_headers/nvme_spec.o 00:03:21.077 CXX test/cpp_headers/nvme_zns.o 00:03:21.077 CXX test/cpp_headers/nvmf_cmd.o 00:03:21.077 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:21.077 CXX test/cpp_headers/nvmf.o 00:03:21.077 LINK cuse 00:03:21.077 CXX test/cpp_headers/nvmf_spec.o 00:03:21.077 LINK spdk_bdev 00:03:21.335 CXX test/cpp_headers/nvmf_transport.o 00:03:21.335 CXX test/cpp_headers/opal.o 00:03:21.335 CXX test/cpp_headers/opal_spec.o 00:03:21.335 CXX test/cpp_headers/pci_ids.o 00:03:21.336 CXX test/cpp_headers/pipe.o 00:03:21.336 CXX test/cpp_headers/queue.o 00:03:21.336 LINK nvmf 00:03:21.336 CXX test/cpp_headers/reduce.o 00:03:21.336 CXX test/cpp_headers/rpc.o 00:03:21.336 CXX test/cpp_headers/scheduler.o 00:03:21.336 CXX test/cpp_headers/scsi.o 00:03:21.336 CXX test/cpp_headers/scsi_spec.o 00:03:21.336 CXX test/cpp_headers/sock.o 00:03:21.594 CXX test/cpp_headers/stdinc.o 00:03:21.594 CXX test/cpp_headers/string.o 00:03:21.594 CXX test/cpp_headers/thread.o 00:03:21.594 CXX test/cpp_headers/trace.o 00:03:21.594 CXX test/cpp_headers/trace_parser.o 00:03:21.594 CXX test/cpp_headers/tree.o 00:03:21.594 CXX test/cpp_headers/ublk.o 00:03:21.594 CXX test/cpp_headers/util.o 00:03:21.594 CXX test/cpp_headers/uuid.o 00:03:21.594 CXX test/cpp_headers/version.o 00:03:21.594 CXX test/cpp_headers/vfio_user_pci.o 00:03:21.594 CXX test/cpp_headers/vfio_user_spec.o 00:03:21.594 CXX test/cpp_headers/vhost.o 00:03:21.594 CXX test/cpp_headers/vmd.o 00:03:21.594 CXX test/cpp_headers/xor.o 00:03:21.852 CXX test/cpp_headers/zipf.o 00:03:22.418 LINK esnap 00:03:22.984 00:03:22.984 real 1m2.980s 00:03:22.984 user 6m31.761s 00:03:22.984 sys 1m39.330s 00:03:22.984 07:28:48 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:22.984 ************************************ 00:03:22.984 END TEST make 00:03:22.984 ************************************ 00:03:22.984 07:28:48 make -- common/autotest_common.sh@10 -- $ set +x 00:03:22.984 07:28:48 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:22.984 07:28:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:22.984 07:28:48 -- 
pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:22.984 07:28:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.984 07:28:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:22.984 07:28:48 -- pm/common@44 -- $ pid=5302 00:03:22.984 07:28:48 -- pm/common@50 -- $ kill -TERM 5302 00:03:22.984 07:28:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.984 07:28:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:22.984 07:28:48 -- pm/common@44 -- $ pid=5303 00:03:22.984 07:28:48 -- pm/common@50 -- $ kill -TERM 5303 00:03:22.984 07:28:48 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:22.984 07:28:48 -- nvmf/common.sh@7 -- # uname -s 00:03:22.984 07:28:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:22.984 07:28:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:22.984 07:28:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:22.984 07:28:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:22.984 07:28:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:22.984 07:28:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:22.984 07:28:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:22.984 07:28:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:22.984 07:28:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:22.984 07:28:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:22.984 07:28:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:03:22.984 07:28:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:03:22.984 07:28:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:22.984 07:28:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:22.984 07:28:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:22.984 07:28:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:22.984 07:28:48 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:22.984 07:28:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:22.984 07:28:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:22.984 07:28:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:22.984 07:28:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.984 07:28:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.984 07:28:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.984 07:28:48 -- paths/export.sh@5 -- # export PATH 00:03:22.984 07:28:48 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.984 07:28:48 -- nvmf/common.sh@47 -- # : 0 00:03:22.984 07:28:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:22.984 07:28:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:22.984 07:28:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:22.984 07:28:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:22.984 07:28:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:22.984 07:28:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:22.984 07:28:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:22.985 07:28:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:22.985 07:28:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:22.985 07:28:48 -- spdk/autotest.sh@32 -- # uname -s 00:03:22.985 07:28:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:22.985 07:28:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:22.985 07:28:48 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:22.985 07:28:48 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:22.985 07:28:48 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:22.985 07:28:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:23.243 07:28:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:23.243 07:28:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:23.243 07:28:48 -- spdk/autotest.sh@48 -- # udevadm_pid=52927 00:03:23.243 07:28:48 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:23.243 07:28:48 -- pm/common@17 -- # local monitor 00:03:23.243 07:28:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.243 07:28:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.243 07:28:48 -- pm/common@25 -- # sleep 1 00:03:23.243 07:28:48 -- pm/common@21 -- # date +%s 00:03:23.243 07:28:48 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:23.243 07:28:48 -- pm/common@21 -- # date +%s 00:03:23.243 07:28:48 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721978928 00:03:23.243 07:28:48 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721978928 00:03:23.243 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721978928_collect-vmstat.pm.log 00:03:23.243 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721978928_collect-cpu-load.pm.log 00:03:24.177 07:28:49 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:24.177 07:28:49 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:24.177 07:28:49 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:24.177 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:03:24.177 07:28:49 -- spdk/autotest.sh@59 -- # create_test_list 00:03:24.177 07:28:49 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:24.177 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:03:24.177 07:28:49 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:24.177 07:28:49 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:24.177 07:28:49 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:24.177 07:28:49 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:24.177 07:28:49 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:24.177 07:28:49 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:24.177 07:28:49 -- common/autotest_common.sh@1455 -- # uname 00:03:24.177 07:28:49 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:24.177 07:28:49 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:24.177 07:28:49 -- common/autotest_common.sh@1475 -- # uname 00:03:24.177 07:28:49 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:24.177 07:28:49 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:24.177 07:28:49 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:24.177 07:28:49 -- spdk/autotest.sh@72 -- # hash lcov 00:03:24.177 07:28:49 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:24.177 07:28:49 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:24.177 --rc lcov_branch_coverage=1 00:03:24.177 --rc lcov_function_coverage=1 00:03:24.177 --rc genhtml_branch_coverage=1 00:03:24.177 --rc genhtml_function_coverage=1 00:03:24.177 --rc genhtml_legend=1 00:03:24.177 --rc geninfo_all_blocks=1 00:03:24.177 ' 00:03:24.177 07:28:49 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:24.177 --rc lcov_branch_coverage=1 00:03:24.177 --rc lcov_function_coverage=1 00:03:24.177 --rc genhtml_branch_coverage=1 00:03:24.177 --rc genhtml_function_coverage=1 00:03:24.177 --rc genhtml_legend=1 00:03:24.177 --rc geninfo_all_blocks=1 00:03:24.177 ' 00:03:24.177 07:28:49 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:24.177 --rc lcov_branch_coverage=1 00:03:24.177 --rc lcov_function_coverage=1 00:03:24.177 --rc genhtml_branch_coverage=1 00:03:24.177 --rc genhtml_function_coverage=1 00:03:24.177 --rc genhtml_legend=1 00:03:24.177 --rc geninfo_all_blocks=1 00:03:24.177 --no-external' 00:03:24.177 07:28:49 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:24.177 --rc lcov_branch_coverage=1 00:03:24.177 --rc lcov_function_coverage=1 00:03:24.177 --rc genhtml_branch_coverage=1 00:03:24.177 --rc genhtml_function_coverage=1 00:03:24.177 --rc genhtml_legend=1 00:03:24.177 --rc geninfo_all_blocks=1 00:03:24.177 --no-external' 00:03:24.177 07:28:49 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:24.177 lcov: LCOV version 1.14 00:03:24.177 07:28:49 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:39.049 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:39.049 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:51.281 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:51.281 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:51.281 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:51.281 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:51.282 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:51.282 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:51.282 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:54.564 07:29:19 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:54.564 07:29:19 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:54.564 07:29:19 -- common/autotest_common.sh@10 -- # set +x 00:03:54.564 07:29:19 -- spdk/autotest.sh@91 -- # rm -f 00:03:54.564 07:29:19 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.130 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.130 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:55.130 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:55.130 07:29:20 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:55.130 07:29:20 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:55.130 07:29:20 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:55.130 07:29:20 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:55.130 07:29:20 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:55.130 07:29:20 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:55.130 07:29:20 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:55.130 07:29:20 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:55.130 07:29:20 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:55.130 07:29:20 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:55.130 07:29:20 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:55.130 07:29:20 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:55.130 07:29:20 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:55.130 07:29:20 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:55.130 07:29:20 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:55.130 07:29:20 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:55.130 07:29:20 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:55.130 07:29:20 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:55.130 07:29:20 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:55.130 07:29:20 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:55.130 07:29:20 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 
00:03:55.130 07:29:20 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:55.130 07:29:20 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:55.130 07:29:20 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:55.130 07:29:20 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:55.130 07:29:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.130 07:29:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:55.130 07:29:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:55.130 07:29:20 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:55.130 07:29:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:55.130 No valid GPT data, bailing 00:03:55.130 07:29:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:55.130 07:29:20 -- scripts/common.sh@391 -- # pt= 00:03:55.130 07:29:20 -- scripts/common.sh@392 -- # return 1 00:03:55.130 07:29:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:55.130 1+0 records in 00:03:55.130 1+0 records out 00:03:55.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00510162 s, 206 MB/s 00:03:55.130 07:29:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.130 07:29:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:55.130 07:29:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:55.130 07:29:20 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:55.130 07:29:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:55.130 No valid GPT data, bailing 00:03:55.130 07:29:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:55.130 07:29:20 -- scripts/common.sh@391 -- # pt= 00:03:55.130 07:29:20 -- scripts/common.sh@392 -- # return 1 00:03:55.130 07:29:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:55.130 1+0 records in 00:03:55.130 1+0 records out 00:03:55.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00388324 s, 270 MB/s 00:03:55.130 07:29:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.130 07:29:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:55.130 07:29:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:55.130 07:29:20 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:55.130 07:29:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:55.389 No valid GPT data, bailing 00:03:55.389 07:29:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:55.389 07:29:20 -- scripts/common.sh@391 -- # pt= 00:03:55.389 07:29:20 -- scripts/common.sh@392 -- # return 1 00:03:55.389 07:29:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:55.389 1+0 records in 00:03:55.389 1+0 records out 00:03:55.389 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435253 s, 241 MB/s 00:03:55.389 07:29:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.389 07:29:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:55.389 07:29:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:55.389 07:29:20 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:55.389 07:29:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:55.389 No valid GPT data, bailing 00:03:55.389 07:29:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:55.389 
07:29:20 -- scripts/common.sh@391 -- # pt= 00:03:55.389 07:29:20 -- scripts/common.sh@392 -- # return 1 00:03:55.389 07:29:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:55.389 1+0 records in 00:03:55.389 1+0 records out 00:03:55.389 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00493123 s, 213 MB/s 00:03:55.389 07:29:20 -- spdk/autotest.sh@118 -- # sync 00:03:55.389 07:29:20 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:55.389 07:29:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:55.389 07:29:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:57.289 07:29:22 -- spdk/autotest.sh@124 -- # uname -s 00:03:57.289 07:29:22 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:57.289 07:29:22 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:57.289 07:29:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:57.289 07:29:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:57.289 07:29:22 -- common/autotest_common.sh@10 -- # set +x 00:03:57.289 ************************************ 00:03:57.289 START TEST setup.sh 00:03:57.289 ************************************ 00:03:57.289 07:29:22 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:57.289 * Looking for test storage... 00:03:57.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:57.289 07:29:22 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:57.289 07:29:22 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:57.289 07:29:22 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:57.289 07:29:22 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:57.289 07:29:22 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:57.289 07:29:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:57.289 ************************************ 00:03:57.289 START TEST acl 00:03:57.289 ************************************ 00:03:57.289 07:29:22 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:57.547 * Looking for test storage... 
00:03:57.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:57.547 07:29:22 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:57.547 07:29:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:57.547 07:29:22 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:57.547 07:29:22 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:57.547 07:29:22 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:57.547 07:29:22 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:57.547 07:29:22 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:57.547 07:29:22 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:57.547 07:29:22 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:58.112 07:29:23 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:58.112 07:29:23 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:58.112 07:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.112 07:29:23 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:58.112 07:29:23 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.112 07:29:23 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:58.677 07:29:24 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:58.677 07:29:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:58.677 07:29:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.677 Hugepages 00:03:58.677 node hugesize free / total 00:03:58.677 07:29:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:58.677 07:29:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:58.677 07:29:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.677 00:03:58.677 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:58.677 07:29:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:58.677 07:29:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:58.677 07:29:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.934 07:29:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:58.934 07:29:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:58.934 07:29:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:58.934 07:29:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.934 07:29:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:58.934 07:29:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.934 07:29:24 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:58.934 07:29:24 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.934 07:29:24 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:58.934 07:29:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.934 07:29:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:58.934 07:29:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.934 07:29:24 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:58.934 07:29:24 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.934 07:29:24 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:58.934 07:29:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.934 07:29:24 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:58.934 07:29:24 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:58.934 07:29:24 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.934 07:29:24 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.934 07:29:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:58.934 ************************************ 00:03:58.934 START TEST denied 00:03:58.934 ************************************ 00:03:58.934 07:29:24 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:58.934 07:29:24 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:58.934 07:29:24 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:58.934 07:29:24 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:58.934 07:29:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.934 07:29:24 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:59.866 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:59.866 07:29:25 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:59.866 07:29:25 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:59.866 07:29:25 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:59.866 07:29:25 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:59.866 07:29:25 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:59.866 07:29:25 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:59.866 07:29:25 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:59.867 07:29:25 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:59.867 07:29:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.867 07:29:25 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.432 00:04:00.432 real 0m1.390s 00:04:00.432 user 0m0.584s 00:04:00.432 sys 0m0.751s 00:04:00.432 07:29:25 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.432 07:29:25 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:00.432 ************************************ 00:04:00.432 END TEST denied 00:04:00.432 ************************************ 00:04:00.432 07:29:25 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:00.432 07:29:25 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:00.432 07:29:25 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:00.432 07:29:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:00.432 ************************************ 00:04:00.432 START TEST allowed 00:04:00.432 ************************************ 00:04:00.432 07:29:25 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:00.432 07:29:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:00.432 07:29:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:00.432 07:29:25 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.432 07:29:25 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:00.432 07:29:25 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:01.364 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.364 07:29:26 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:01.364 07:29:26 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:01.364 07:29:26 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:01.364 07:29:26 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:01.364 07:29:26 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:01.364 07:29:26 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:01.364 07:29:26 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:01.364 07:29:26 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:01.364 07:29:26 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.364 07:29:26 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:01.931 00:04:01.931 real 0m1.491s 00:04:01.931 user 0m0.669s 00:04:01.931 sys 0m0.808s 00:04:01.931 07:29:27 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.931 07:29:27 setup.sh.acl.allowed -- common/autotest_common.sh@10 
-- # set +x 00:04:01.931 ************************************ 00:04:01.931 END TEST allowed 00:04:01.931 ************************************ 00:04:01.931 00:04:01.931 real 0m4.617s 00:04:01.931 user 0m2.091s 00:04:01.931 sys 0m2.471s 00:04:01.931 07:29:27 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.931 07:29:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:01.931 ************************************ 00:04:01.931 END TEST acl 00:04:01.931 ************************************ 00:04:01.931 07:29:27 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:01.931 07:29:27 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.931 07:29:27 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.931 07:29:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:01.931 ************************************ 00:04:01.931 START TEST hugepages 00:04:01.931 ************************************ 00:04:01.931 07:29:27 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:02.191 * Looking for test storage... 00:04:02.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:02.191 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:02.191 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:02.191 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:02.191 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:02.191 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:02.191 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:02.191 07:29:27 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:02.191 07:29:27 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:02.191 07:29:27 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:02.191 07:29:27 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:02.191 07:29:27 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.191 07:29:27 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.191 07:29:27 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.191 07:29:27 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.191 07:29:27 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.191 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6031520 kB' 'MemAvailable: 7413780 kB' 'Buffers: 2436 kB' 'Cached: 1596548 kB' 'SwapCached: 0 kB' 'Active: 435148 kB' 'Inactive: 1267636 kB' 'Active(anon): 114288 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 105672 kB' 'Mapped: 48452 kB' 'Shmem: 10488 kB' 'KReclaimable: 61404 kB' 'Slab: 133160 kB' 'SReclaimable: 61404 kB' 'SUnreclaim: 71756 kB' 'KernelStack: 6376 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 
'Committed_AS: 335168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.192 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.192 07:29:27 
setup.sh.hugepages -- setup/common.sh@32 -- # continue
[07:29:27 setup.sh.hugepages -- setup/common.sh@31-32: the IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue cycle repeats for each remaining /proc/meminfo field, Inactive(anon) through HugePages_Total]
07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 
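The stretch of setup/common.sh xtrace above is the same get_meminfo helper that appears later in this log, here walking the captured /proc/meminfo contents with IFS=': ' and read -r var val _ until Hugepagesize matches, echoing 2048 and returning 0 -- which is where default_hugepages=2048 (a 2 MiB page size) comes from. A minimal standalone sketch of that lookup pattern; the function name and the direct read from /proc/meminfo are illustrative, the suite's helper works off a mapfile'd copy as the trace shows:

```bash
#!/usr/bin/env bash
# Sketch of the field lookup the xtrace above performs (illustrative helper,
# not a copy of setup/common.sh): print the value of one /proc/meminfo field.
meminfo_get() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        # Skip every non-matching field, mirroring the long run of
        # "continue" entries in the trace.
        [[ $var == "$want" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

meminfo_get Hugepagesize   # prints 2048 (kB) on this test VM
```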
00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:02.193 07:29:27 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:02.193 07:29:27 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.194 07:29:27 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:02.194 07:29:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.194 ************************************ 00:04:02.194 START TEST default_setup 00:04:02.194 ************************************ 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.194 07:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:02.759 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.021 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.021 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.021 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:03.021 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:03.021 07:29:28 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@90 -- # local sorted_t 00:04:03.021 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8122024 kB' 'MemAvailable: 9504184 kB' 'Buffers: 2436 kB' 'Cached: 1596532 kB' 'SwapCached: 0 kB' 'Active: 451980 kB' 'Inactive: 1267644 kB' 'Active(anon): 131120 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 122232 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133060 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71872 kB' 'KernelStack: 6372 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.022 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
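A few entries back, run_test started default_setup and get_test_nr_hugepages turned the requested size 2097152 into nr_hugepages=1024 by dividing by the 2048 kB default page size; the /proc/meminfo snapshot printed just above agrees (HugePages_Total: 1024, Hugepagesize: 2048 kB, Hugetlb: 2097152 kB = 1024 x 2048 kB). A rough sketch of that sizing plus the clear_hp-style zeroing of the per-node pools traced earlier; the helper layout is illustrative (the real logic lives in setup/hugepages.sh and scripts/setup.sh) and writing these knobs needs root:

```bash
#!/usr/bin/env bash
# Illustrative sketch of the sizing and pool-clearing steps traced above;
# not a copy of setup/hugepages.sh. Writing the sysfs/procfs knobs needs root.

default_hugepages=2048                                 # kB, from the Hugepagesize lookup
default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages   # per-size knob
global_huge_nr=/proc/sys/vm/nr_hugepages                                 # global knob

size=2097152                                           # requested amount, same unit as default_hugepages
nr_hugepages=$(( size / default_hugepages ))           # 2097152 / 2048 = 1024 pages

# clear_hp analogue: zero every per-node, per-size pool before the test runs.
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
    echo 0 > "$hp"
done

# In the traced run scripts/setup.sh performs the allocation (HUGEMEM/NRHUGE/
# HUGENODE were unset above); writing the global knob directly here just
# shows the intended end state.
echo "$nr_hugepages" > "$global_huge_nr"
grep -E 'HugePages_Total|Hugepagesize|Hugetlb' /proc/meminfo
```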
[07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31-32: the same field-by-field scan repeats, checking MemFree through VmallocTotal against \A\n\o\n\H\u\g\e\P\a\g\e\s and skipping each with continue]
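The scan summarized above is get_meminfo AnonHugePages, the first of three counters verify_nr_hugepages reads this way; HugePages_Surp and HugePages_Rsvd follow below. Transparent hugepages are in madvise mode on this VM ('always [madvise] never'), so the anonymous-THP counter is consulted rather than skipped. A one-pass awk alternative to the three separate scans, offered only as a simplification of what the trace does:

```bash
#!/usr/bin/env bash
# One-pass alternative to the three get_meminfo scans in this part of the log:
# pull AnonHugePages, HugePages_Surp and HugePages_Rsvd together (illustrative).
read -r anon surp resv < <(awk '
    /^AnonHugePages:/  { anon = $2 }
    /^HugePages_Surp:/ { surp = $2 }
    /^HugePages_Rsvd:/ { resv = $2 }
    END { print anon + 0, surp + 0, resv + 0 }
' /proc/meminfo)

echo "anon=${anon} surp=${surp} resv=${resv}"
# The snapshots above report 0 for all three, so HugePages_Total (1024) can be
# compared directly against the 1024 pages the test asked for.
```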
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8122136 kB' 'MemAvailable: 9504300 kB' 'Buffers: 2436 kB' 'Cached: 
1596536 kB' 'SwapCached: 0 kB' 'Active: 451840 kB' 'Inactive: 1267648 kB' 'Active(anon): 130980 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 122148 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133076 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71888 kB' 'KernelStack: 6416 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.023 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.024 07:29:28 
[07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31-32: the scan repeats once more, checking Active through HugePages_Total against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipping each with continue]
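Each of these get_meminfo calls also picks its data source first: mem_f defaults to /proc/meminfo, and only when a node id is passed (and /sys/devices/system/node/node<N>/meminfo exists) does it switch to the per-node file, stripping the 'Node <N> ' prefix with the extglob expansion visible in the trace. The node argument is empty here, so every read comes from the global file. A rough approximation of that selection step, not a copy of setup/common.sh:

```bash
#!/usr/bin/env bash
# Approximation of the source-selection step seen at the start of each
# get_meminfo call: prefer a NUMA node's meminfo when a node id is given.
shopt -s extglob

node=${1:-}                        # empty => system-wide /proc/meminfo
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi

mapfile -t mem < "$mem_f"
# Per-node files prefix every line with "Node <N> "; strip it so the same
# "Field: value" parsing works for either source.
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}" | grep -m1 '^HugePages_Free:'
```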
# continue 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8122528 kB' 'MemAvailable: 9504692 kB' 'Buffers: 2436 kB' 'Cached: 1596536 kB' 'SwapCached: 0 kB' 'Active: 451852 kB' 'Inactive: 1267648 kB' 'Active(anon): 130992 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122128 kB' 'Mapped: 48448 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133076 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71888 kB' 'KernelStack: 6384 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352224 kB' 'VmallocTotal: 
00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:03.025 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8122528 kB' 'MemAvailable: 9504692 kB' 'Buffers: 2436 kB' 'Cached: 1596536 kB' 'SwapCached: 0 kB' 'Active: 451852 kB' 'Inactive: 1267648 kB' 'Active(anon): 130992 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122128 kB' 'Mapped: 48448 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133076 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71888 kB' 'KernelStack: 6384 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB'
00:04:03.026 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue for every key from MemTotal through HugePages_Free
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:03.027 nr_hugepages=1024
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:03.027 resv_hugepages=0
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:03.027 surplus_hugepages=0
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:03.027 anon_hugepages=0
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
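The two arithmetic checks just traced are the kind of accounting the default_setup case is verifying: the hugepage pool the kernel reports has to add up to the requested nr_hugepages plus surplus and reserved pages. Which counter supplies each literal 1024 is not visible in the flattened trace, so the variable names below are illustrative, and the awk lookups merely stand in for the script's own get_meminfo helper.

#!/usr/bin/env bash
# Standalone illustration of the accounting check seen in the trace.
# nr_hugepages=1024 mirrors the pool size this test run configured.
set -euo pipefail

nr_hugepages=1024
free=$(awk '$1 == "HugePages_Free:"  {print $2}' /proc/meminfo)
surp=$(awk '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
resv=$(awk '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)

# A healthy default setup: every page that exists is either requested,
# surplus, or reserved, and the total matches what was asked for.
if (( total != nr_hugepages + surp + resv )); then
	echo "hugepage accounting mismatch: total=$total" >&2
	exit 1
fi
echo "HugePages_Total=$total HugePages_Free=$free (surplus=$surp reserved=$resv)"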
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:03.027 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8122024 kB' 'MemAvailable: 9504188 kB' 'Buffers: 2436 kB' 'Cached: 1596536 kB' 'SwapCached: 0 kB' 'Active: 451844 kB' 'Inactive: 1267648 kB' 'Active(anon): 130984 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122128 kB' 'Mapped: 48448 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133076 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71888 kB' 'KernelStack: 6384 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB'
00:04:03.028 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue for every key from MemTotal through Unaccepted
00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
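What follows is the same lookup run once per NUMA node (only node0 exists on this VM), against /sys/devices/system/node/node0/meminfo instead of /proc/meminfo. A minimal standalone way to read such a per-node counter is sketched below; the awk form is only an illustration, the script itself goes through get_meminfo as traced.

#!/usr/bin/env bash
# Read a hugepage counter for one NUMA node. Per-node meminfo lines are
# prefixed with "Node <n> ", e.g. "Node 0 HugePages_Surp:     0", so the key
# sits in field 3 and the value in field 4.
node=0
counter=HugePages_Surp
node_meminfo=/sys/devices/system/node/node$node/meminfo

if [[ -e $node_meminfo ]]; then
	awk -v key="$counter:" '$3 == key {print $4}' "$node_meminfo"
else
	# No per-node file (e.g. non-NUMA kernel): fall back to the global counter.
	awk -v key="$counter:" '$1 == key {print $2}' /proc/meminfo
fi

The "Node 0 " prefix on these lines is exactly what the mem=("${mem[@]#Node +([0-9]) }") step in the trace strips before splitting key and value.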
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8121772 kB' 'MemUsed: 4120204 kB' 'SwapCached: 0 kB' 'Active: 452064 kB' 'Inactive: 1267648 kB' 'Active(anon): 131204 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1598972 kB' 'Mapped: 48448 kB' 'AnonPages: 122364 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61188 kB' 'Slab: 133076 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.289 
07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.289 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- 
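
The block of @31/@32 entries above is the setup/common.sh helper walking a /proc/meminfo snapshot one key at a time: each line is split with IFS=': ', the key is compared against the requested field (here HugePages_Surp), mismatches hit the @32 'continue', and the first match is echoed at @33 before the helper returns. The real helper mapfiles the snapshot into an array first and strips a leading 'Node <id> ' prefix when a node is given (the @28/@29 entries later in the trace); the sketch below reads the file directly and only illustrates the matching loop reconstructed from the trace — it is not copied from setup/common.sh.

  # Minimal sketch of the per-key scan traced above; get_meminfo_sketch is an
  # illustrative stand-in for the get_meminfo helper named in the trace.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # mismatched keys -> the '@32 continue' entries
          echo "$val"                        # matched key -> the '@33 echo' entry
          return 0
      done < /proc/meminfo
  }
  # get_meminfo_sketch HugePages_Surp   -> 0 in the snapshots printed below
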
# sorted_t[nodes_test[node]]=1 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.290 node0=1024 expecting 1024 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:03.290 00:04:03.290 real 0m1.010s 00:04:03.290 user 0m0.497s 00:04:03.290 sys 0m0.490s 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.290 07:29:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:03.290 ************************************ 00:04:03.290 END TEST default_setup 00:04:03.290 ************************************ 00:04:03.290 07:29:28 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:03.290 07:29:28 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.290 07:29:28 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.290 07:29:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:03.290 ************************************ 00:04:03.290 START TEST per_node_1G_alloc 00:04:03.290 ************************************ 00:04:03.290 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:04:03.290 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:03.290 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:03.290 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:03.290 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:03.290 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:03.290 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:03.290 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:03.290 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.290 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:03.290 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:03.290 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:03.290 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.290 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:03.290 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:03.290 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.290 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.291 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:03.291 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:03.291 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # 
nodes_test[_no_nodes]=512 00:04:03.291 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:03.291 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:03.291 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:03.291 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:03.291 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.291 07:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.553 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.553 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9166880 kB' 'MemAvailable: 10549048 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 452692 kB' 'Inactive: 1267652 kB' 'Active(anon): 131832 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 
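
The per_node_1G_alloc preamble just traced turns the 1048576 kB (1 GiB) request on node 0 into a count of 512 pages (nr_hugepages=512, nodes_test[0]=512) and then re-runs scripts/setup.sh with NRHUGE=512 and HUGENODE=0. A hedged sketch of that conversion follows; dividing by the 2048 kB default hugepage size is inferred from the numbers in the trace and the 'Hugepagesize: 2048 kB' field of the snapshot below, not taken from the hugepages.sh source.

  # 1 GiB of default-size hugepages, pinned to NUMA node 0.
  size_kb=1048576                                  # argument to get_test_nr_hugepages
  hugepagesize_kb=2048                             # 'Hugepagesize: 2048 kB' in the snapshot
  nr_hugepages=$(( size_kb / hugepagesize_kb ))    # -> 512, matching nr_hugepages=512 above
  # The trace then exports these before re-running the setup script (root required):
  NRHUGE=$nr_hugepages HUGENODE=0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
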
'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122928 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133060 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71872 kB' 'KernelStack: 6388 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
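
The snapshot printed above is consistent with that request:

  512 pages x 2048 kB/page = 1048576 kB  (= the 1 GiB requested via NRHUGE=512)

which matches 'Hugetlb: 1048576 kB', with all 512 pages still free ('HugePages_Free: 512') and no reserved or surplus pages in play ('HugePages_Rsvd: 0', 'HugePages_Surp: 0').
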
00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.553 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.554 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9166880 kB' 'MemAvailable: 10549048 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 452228 kB' 'Inactive: 1267652 kB' 'Active(anon): 131368 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122440 kB' 'Mapped: 48508 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133060 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71872 kB' 'KernelStack: 6384 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 
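
At this point the AnonHugePages pass has finished (anon=0) and the verifier moves on to HugePages_Surp, which is why the same meminfo snapshot is printed and scanned again below. The AnonHugePages read itself was gated by the @96 check on the transparent-hugepage policy string ('always [madvise] never' does not match *[never]*, so the read happens). A small equivalent sketch; the sysfs path is the usual source of that policy string and is an assumption here — the trace only shows the comparison.

  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # typically: always [madvise] never
  if [[ $thp != *"[never]"* ]]; then
      # equivalent one-liner for the traced AnonHugePages read; reports 0 kB in the snapshot above
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
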
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.555 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.556 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9167484 kB' 'MemAvailable: 10549652 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 452232 kB' 'Inactive: 1267652 kB' 'Active(anon): 131372 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122460 kB' 'Mapped: 48508 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133060 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71872 kB' 'KernelStack: 6368 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- 
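
With surp=0 recorded, the trace now repeats the whole scan a third time for HugePages_Rsvd; each counter the verifier needs costs another full pass over the snapshot. For reference, a single pass can collect all three counters at once — an alternative formulation for readability, not how setup/common.sh is traced as doing it.

  # Gather in one pass the three counters the verifier reads one by one (anon/surp/rsvd).
  declare -A hp
  while IFS=': ' read -r var val _; do
      case $var in
          AnonHugePages|HugePages_Surp|HugePages_Rsvd) hp[$var]=$val ;;
      esac
  done < /proc/meminfo
  printf 'anon=%s surp=%s rsvd=%s\n' "${hp[AnonHugePages]}" "${hp[HugePages_Surp]}" "${hp[HugePages_Rsvd]}"
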
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.557 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.558 
07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:03.558 nr_hugepages=512 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:03.558 resv_hugepages=0 00:04:03.558 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.558 surplus_hugepages=0 00:04:03.559 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.559 anon_hugepages=0 00:04:03.559 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.559 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:03.559 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:03.559 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.559 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.559 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.559 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.559 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.559 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.559 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.559 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.559 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.559 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.559 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.559 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9167484 kB' 'MemAvailable: 10549652 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 452128 kB' 'Inactive: 1267652 kB' 'Active(anon): 131268 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122120 kB' 'Mapped: 48508 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133060 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71872 kB' 'KernelStack: 6352 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 
kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.819 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.820 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@33 -- # echo 512 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9168784 kB' 'MemUsed: 3073192 kB' 'SwapCached: 0 kB' 'Active: 451836 kB' 'Inactive: 1267652 kB' 'Active(anon): 130976 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1598976 kB' 'Mapped: 48448 kB' 'AnonPages: 122384 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61188 kB' 'Slab: 133072 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.821 07:29:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.821 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.822 node0=512 expecting 512 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:03.822 00:04:03.822 real 0m0.495s 00:04:03.822 user 0m0.250s 00:04:03.822 sys 0m0.279s 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.822 07:29:29 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:03.822 ************************************ 00:04:03.822 END TEST per_node_1G_alloc 00:04:03.822 ************************************ 00:04:03.822 07:29:29 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:03.822 07:29:29 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.822 07:29:29 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.822 07:29:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:03.822 ************************************ 00:04:03.822 START TEST even_2G_alloc 00:04:03.822 ************************************ 00:04:03.822 07:29:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:03.822 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:03.822 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:03.822 07:29:29 
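The trace above closes the per_node_1G_alloc case ("node0=512 expecting 512") and hands 2097152 to get_test_nr_hugepages for the even_2G_alloc case, which the script resolves to nr_hugepages=1024. A minimal sketch of that conversion, assuming the size argument is the total allocation in kB and the default hugepage size is the 2048 kB reported in the meminfo dumps below; this is a reconstruction of the arithmetic, not the literal setup/hugepages.sh code.

# Sketch of the size -> page-count step traced above (assumption: size is expressed in kB).
size_kb=2097152                                                          # argument passed to get_test_nr_hugepages
default_hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this run
echo "nr_hugepages=$(( size_kb / default_hugepage_kb ))"                 # prints nr_hugepages=1024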
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:03.822 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.823 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.084 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.084 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- 
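The "setup output" step traced above re-runs SPDK's scripts/setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes in the environment, which (judging by the test name) asks for the 1024-page pool to be spread evenly across NUMA nodes rather than filled on node0 first. Re-expressed as a standalone command, with the variables and path taken verbatim from the trace; running it by hand would normally require root.

# Equivalent standalone invocation of the traced "setup output" step (needs root in practice).
NRHUGE=1024 HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh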
setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8118796 kB' 'MemAvailable: 9500964 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 452716 kB' 'Inactive: 1267652 kB' 'Active(anon): 131856 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122740 kB' 'Mapped: 48660 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133048 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71860 kB' 'KernelStack: 6404 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 
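The meminfo snapshot printed above is internally consistent: 1024 HugePages_Total at a 2048 kB Hugepagesize accounts exactly for the 2097152 kB Hugetlb line, i.e. the requested 2 GiB pool is already in place before the verification loop below walks the fields one by one. A small check of that arithmetic against the live /proc/meminfo (numbers will differ on another machine):

# Consistency check: total hugetlb memory equals page count x page size.
awk '/^HugePages_Total:/ {t=$2} /^Hugepagesize:/ {sz=$2} END {print t * sz " kB"}' /proc/meminfo   # 1024 * 2048 = 2097152 kB on this run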
07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 07:29:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 
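The field-by-field checks that dominate this log (IFS=': ', read -r var val _, compare, continue) are the body of setup/common.sh's get_meminfo helper scanning the snapshot above. A reconstruction pieced together from the xtrace, not the verbatim script, so treat the details as assumptions: it prints the value of one /proc/meminfo field, or of a node's own meminfo file when a node id is supplied, defaulting to 0 when the field is absent.

# Reconstructed sketch of the get_meminfo helper seen throughout this trace.
shopt -s extglob                              # needed for the +([0-9]) pattern below
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem var val _ line
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node lookup
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # strip the "Node N " prefix of per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue      # the comparison repeated on every line of this log
        echo "${val:-0}"
        return 0
    done
    echo 0
}

A per-node call such as get_meminfo HugePages_Free 0 is presumably how the earlier "node0=512" style counts were gathered, though that exact usage is an assumption here.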
07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.085 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8118796 kB' 'MemAvailable: 9500964 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 452300 kB' 'Inactive: 1267652 kB' 'Active(anon): 131440 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122276 kB' 'Mapped: 48540 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133048 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71860 kB' 'KernelStack: 6356 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
13461016 kB' 'Committed_AS: 352224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 
07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 
07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.086 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.087 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8118796 kB' 'MemAvailable: 9500964 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 452072 kB' 'Inactive: 1267652 kB' 'Active(anon): 131212 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122076 kB' 'Mapped: 48480 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133024 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71836 kB' 'KernelStack: 6364 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 
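At this point the trace has collected anon=0 and surp=0 and is about to query HugePages_Rsvd. A hedged summary of where those three values go, using the get_meminfo sketch above; the exact assertion inside setup/hugepages.sh is not shown in this excerpt, so the final check below is an assumption that is merely consistent with the traced variable names.

anon=$(get_meminfo AnonHugePages)    # 0 kB in the trace: THP is [madvise] here, nothing transparently huge
surp=$(get_meminfo HugePages_Surp)   # 0 in the trace: no surplus pages beyond the configured pool
resv=$(get_meminfo HugePages_Rsvd)   # queried next in the trace
# Plausible final check (assumption): the free pool, net of surplus/reserved pages, covers the 1024 requested.
(( $(get_meminfo HugePages_Free) + surp - resv >= 1024 )) && echo "pool OK"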
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.088 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 
07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 
07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:04.351 nr_hugepages=1024 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:04.351 resv_hugepages=0 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.351 surplus_hugepages=0 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.351 anon_hugepages=0 00:04:04.351 07:29:29 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8119144 kB' 'MemAvailable: 9501312 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 452292 kB' 'Inactive: 1267652 kB' 'Active(anon): 131432 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122296 kB' 'Mapped: 48480 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133024 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71836 kB' 'KernelStack: 6348 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
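The loop traced above is setup/common.sh's get_meminfo walking /proc/meminfo (or a per-node meminfo file) one "key: value" pair at a time with IFS=': ', skipping every field that is not the one requested (HugePages_Total at this point) and echoing the matching value. A minimal stand-alone sketch of that same pattern follows; the helper name get_meminfo_value and the sed-based prefix strip are illustrative assumptions, not the exact SPDK code:

    # Sketch only: scan a meminfo file and print the value of one key,
    # mirroring the IFS=': ' / read -r / continue loop seen in the trace.
    get_meminfo_value() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node files prefix each line with "Node <id> "; drop that first,
        # like the "${mem[@]#Node +([0-9]) }" expansion in the trace.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the many "continue" steps above
            echo "$val"
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    # e.g. get_meminfo_value HugePages_Total    -> 1024
    #      get_meminfo_value HugePages_Free 0   -> per-node value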
00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.351 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8119144 kB' 'MemUsed: 4122832 kB' 'SwapCached: 0 kB' 'Active: 452296 kB' 'Inactive: 1267652 kB' 'Active(anon): 131436 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1598976 kB' 'Mapped: 48480 kB' 'AnonPages: 122276 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61188 kB' 'Slab: 133024 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71836 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 
07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.353 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
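With HugePages_Rsvd (0), HugePages_Total (1024) and node0's HugePages_Surp (0) read back, the even_2G_alloc verification below reduces to simple bookkeeping. A recap of that arithmetic, using the numbers from the trace; the variable names only mirror hugepages.sh for illustration and this is not the script's source:

    nr_hugepages=1024    # requested for even_2G_alloc: 1024 x 2048 kB = 2 GiB
    resv=0               # HugePages_Rsvd read system-wide
    surp=0               # HugePages_Surp read from node0
    (( 1024 == nr_hugepages + surp + resv )) && echo "global count OK"
    node0=$((1024 + resv + surp))            # single NUMA node holds them all
    [[ $node0 == 1024 ]] && echo "node0=$node0 expecting 1024"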
00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:04.354 node0=1024 expecting 1024 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:04.354 00:04:04.354 real 0m0.507s 00:04:04.354 user 0m0.250s 00:04:04.354 sys 0m0.290s 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.354 07:29:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:04.354 
************************************ 00:04:04.354 END TEST even_2G_alloc 00:04:04.354 ************************************ 00:04:04.354 07:29:29 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:04.354 07:29:29 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.354 07:29:29 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.354 07:29:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:04.354 ************************************ 00:04:04.354 START TEST odd_alloc 00:04:04.354 ************************************ 00:04:04.354 07:29:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:04.354 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:04.354 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:04.354 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.355 07:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.614 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.614 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.614 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:04.614 07:29:30 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8121004 kB' 'MemAvailable: 9503172 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 452764 kB' 'Inactive: 1267652 kB' 'Active(anon): 131904 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122836 kB' 'Mapped: 48516 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133040 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71852 kB' 'KernelStack: 6372 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.614 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.615 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8121264 kB' 'MemAvailable: 9503432 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 452136 kB' 'Inactive: 1267652 kB' 'Active(anon): 131276 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122436 kB' 'Mapped: 48516 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133044 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71856 kB' 'KernelStack: 6356 kB' 'PageTables: 4176 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.616 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.878 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.878 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.878 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.878 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.878 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.878 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.878 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.878 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.878 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.878 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.878 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.878 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.878 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.878 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.878 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.878 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.878 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.878 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:04.879 07:29:30 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:04.879 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8121656 kB' 'MemAvailable: 9503824 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 451888 kB' 'Inactive: 1267652 kB' 'Active(anon): 131028 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122184 kB' 'Mapped: 48448 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133040 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71852 kB' 'KernelStack: 6384 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.880 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 
07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:04.881 nr_hugepages=1025 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:04.881 resv_hugepages=0 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.881 surplus_hugepages=0 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.881 anon_hugepages=0 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.881 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8121656 
kB' 'MemAvailable: 9503824 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 452104 kB' 'Inactive: 1267652 kB' 'Active(anon): 131244 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122400 kB' 'Mapped: 48448 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133040 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71852 kB' 'KernelStack: 6368 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 
07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.882 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
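The long run of [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue records here is the xtrace of a field-by-field scan over /proc/meminfo. A minimal sketch of that kind of lookup, assuming plain bash and /proc/meminfo only (the in-tree setup/common.sh helper also handles per-node files and differs in detail):

#!/usr/bin/env bash
# Sketch: return the value of one /proc/meminfo field, e.g. HugePages_Total.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching field is skipped; this skip is what shows up in
        # the trace as the long run of "continue" records.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Total   # prints 1025 during the odd_alloc run above
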
00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.883 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:04.884 
07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8121404 kB' 'MemUsed: 4120572 kB' 'SwapCached: 0 kB' 'Active: 452112 kB' 'Inactive: 1267652 kB' 'Active(anon): 131252 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1598976 kB' 'Mapped: 48448 kB' 'AnonPages: 122356 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61204 kB' 'Slab: 133056 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 71852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
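For the HugePages_Surp lookup above, the same scan runs against /sys/devices/system/node/node0/meminfo instead of /proc/meminfo; the per-node file prefixes every line with "Node 0 ", which the trace strips with an extglob expansion before reading key/value pairs. A hedged sketch of that per-node variant (function name is illustrative):

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used below

# Sketch: look up one field, optionally from a NUMA node's meminfo file.
get_node_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node 0 " prefix
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_node_meminfo_sketch HugePages_Surp 0   # prints 0 in the node0 trace above
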
00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.884 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.885 node0=1025 expecting 1025 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:04.885 
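The odd_alloc pass that just finished reduces to a small accounting check: HugePages_Total has to cover the odd request of 1025 pages together with any surplus/reserved pages, and the single node on this VM has to report the same count, hence the node0=1025 expecting 1025 line. A hedged sketch of that check, with illustrative names rather than the exact setup/hugepages.sh bookkeeping:

#!/usr/bin/env bash
# Sketch of the odd_alloc verification traced above.
verify_odd_alloc_sketch() {
    local expected=$1 total surp resv
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    # The odd request only passes if every page was really allocated;
    # surplus and reserved pages are folded into the same total.
    (( total == expected + surp + resv )) || return 1
    echo "node0=${total} expecting ${expected}"
    [[ $total -eq $expected ]]
}

verify_odd_alloc_sketch 1025   # the run above prints node0=1025 expecting 1025
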
00:04:04.885 real 0m0.505s 00:04:04.885 user 0m0.247s 00:04:04.885 sys 0m0.293s 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.885 07:29:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:04.885 ************************************ 00:04:04.885 END TEST odd_alloc 00:04:04.885 ************************************ 00:04:04.885 07:29:30 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:04.885 07:29:30 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.885 07:29:30 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.885 07:29:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:04.885 ************************************ 00:04:04.885 START TEST custom_alloc 00:04:04.885 ************************************ 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:04.885 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.886 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.144 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.144 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.144 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.144 07:29:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9178164 kB' 'MemAvailable: 10560340 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 452728 kB' 'Inactive: 1267652 kB' 'Active(anon): 131868 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123024 kB' 'Mapped: 48500 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 133060 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 71856 kB' 'KernelStack: 6372 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.144 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
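The custom_alloc snapshot above (HugePages_Total: 512, Hugepagesize: 2048 kB, Hugetlb: 1048576 kB) is simply the 1 GiB request from get_test_nr_hugepages 1048576 divided by the 2 MiB default hugepage size, then pinned to node 0 via HUGENODE. A hedged sketch of that conversion (helper name is illustrative):

#!/usr/bin/env bash
# Sketch: convert a requested size in kB into a hugepage count.
pages_for_size_sketch() {
    local size_kb=$1 hugepagesize_kb
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    echo $(( size_kb / hugepagesize_kb ))
}

nr=$(pages_for_size_sketch 1048576)   # 1048576 kB / 2048 kB -> 512 pages
HUGENODE="nodes_hp[0]=$nr"            # same shape as the HUGENODE in the trace
echo "$HUGENODE"                      # -> nodes_hp[0]=512
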
00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.407 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9178420 kB' 'MemAvailable: 10560596 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 452224 kB' 'Inactive: 1267652 kB' 'Active(anon): 131364 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122772 kB' 'Mapped: 48500 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 133060 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 71856 kB' 'KernelStack: 6356 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.408 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9178420 kB' 'MemAvailable: 10560596 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 451884 kB' 'Inactive: 1267652 kB' 'Active(anon): 131024 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122424 kB' 'Mapped: 48448 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 133060 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 71856 kB' 'KernelStack: 6384 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:05.412 nr_hugepages=512 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:05.412 resv_hugepages=0 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.412 surplus_hugepages=0 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.412 anon_hugepages=0 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.412 
07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9178420 kB' 'MemAvailable: 10560596 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 451912 kB' 'Inactive: 1267652 kB' 'Active(anon): 131052 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122424 kB' 'Mapped: 48448 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 133060 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 71856 kB' 'KernelStack: 6368 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.412 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.413 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9178420 kB' 'MemUsed: 3063556 kB' 'SwapCached: 0 kB' 'Active: 452120 kB' 'Inactive: 1267652 kB' 'Active(anon): 131260 kB' 'Inactive(anon): 0 kB' 
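With the global HugePages_Total confirmed at 512, hugepages.sh@112-117 above enumerates /sys/devices/system/node/node* (a single node on this VM) and calls get_meminfo HugePages_Surp 0; the trace shows the helper switching its source from /proc/meminfo to /sys/devices/system/node/node0/meminfo and stripping the leading 'Node 0 ' from each line before running the same key scan. A simplified per-node reader under that reading (get_node_meminfo is a hypothetical stand-in, not the common.sh helper):

  # Hypothetical per-node variant of the scan; mirrors the node0 branch in the trace.
  shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node 0 "

  get_node_meminfo() {
      local get=$1 node=$2 mem_f=/proc/meminfo line var val _
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo

      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "

      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  get_node_meminfo HugePages_Surp 0   # prints 0 in this run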
'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1598976 kB' 'Mapped: 48448 kB' 'AnonPages: 122372 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61204 kB' 'Slab: 133060 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 71856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.414 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.415 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.416 node0=512 expecting 512 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:05.416 00:04:05.416 real 0m0.526s 00:04:05.416 user 0m0.266s 00:04:05.416 sys 0m0.293s 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.416 07:29:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:05.416 ************************************ 00:04:05.416 END TEST custom_alloc 00:04:05.416 ************************************ 00:04:05.416 07:29:30 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:05.416 07:29:30 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.416 07:29:30 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.416 07:29:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:05.416 ************************************ 00:04:05.416 START TEST no_shrink_alloc 00:04:05.416 ************************************ 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:05.416 07:29:30 
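The 'node0=512 expecting 512' line and the [[ 512 == 512 ]] check just above are hugepages.sh@126-130 reducing the per-node counters to a single expected-vs-observed comparison, which closes custom_alloc before no_shrink_alloc starts the same cycle with a larger pool. A small sketch of that final comparison, with array names borrowed from the trace and values from this run:

  # Assumed illustration of the closing per-node check for custom_alloc.
  declare -A nodes_test=( [0]=512 )   # pages the test expects per node
  declare -A nodes_sys=( [0]=512 )    # pages the kernel actually reports per node

  for node in "${!nodes_test[@]}"; do
      echo "node$node=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
      [[ ${nodes_sys[$node]} == "${nodes_test[$node]}" ]] || exit 1
  done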
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.416 07:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.937 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.937 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
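get_test_nr_hugepages is called with 2097152 (kB) and node 0, and hugepages.sh@57/@71 above settle on nr_hugepages=1024 pinned to node 0; with the 2048 kB Hugepagesize reported by this VM that is exactly the requested size divided by the huge-page size, though the exact formula is inferred from the numbers in this log rather than quoted from hugepages.sh. A hedged sketch of that sizing step:

  # Inferred from the values in this run; not the literal hugepages.sh code.
  size_kb=2097152                                                      # requested pool (2 GiB)
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM

  nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 2097152 / 2048 = 1024
  declare -A nodes_test=( [0]=$nr_hugepages )     # the single requested node gets them all

  echo "nr_hugepages=$nr_hugepages on node 0"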
get=AnonHugePages 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.937 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127400 kB' 'MemAvailable: 9509576 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 452476 kB' 'Inactive: 1267652 kB' 'Active(anon): 131616 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122772 kB' 'Mapped: 48636 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 133032 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 71828 kB' 'KernelStack: 6392 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
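verify_nr_hugepages first checks that transparent hugepages are not forced to 'never' ('always [madvise] never' passes the *[never]* test at hugepages.sh@96 above) and then pulls AnonHugePages from a fresh /proc/meminfo snapshot, which now reports HugePages_Total: 1024 and Hugetlb: 2097152 kB after scripts/setup.sh ran. A minimal sketch of those two probes, using the standard kernel paths rather than the SPDK helpers:

  # Standalone version of the two checks, for illustration.
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  [[ $thp != *"[never]"* ]] && echo "THP not disabled: $thp"

  anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  echo "AnonHugePages=${anon_kb} kB HugePages_Total=${total}"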
' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.938 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.939 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127400 kB' 'MemAvailable: 9509576 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 452380 kB' 'Inactive: 1267652 kB' 'Active(anon): 131520 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122632 kB' 'Mapped: 48448 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 133028 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 71824 kB' 'KernelStack: 6368 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.940 07:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.940 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 
07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.941 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 07:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127400 kB' 'MemAvailable: 9509576 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 452208 kB' 'Inactive: 1267652 kB' 'Active(anon): 131348 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122480 kB' 'Mapped: 48448 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 133028 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 71824 kB' 'KernelStack: 6384 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.943 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.944 
07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.944 07:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.944 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.945 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:05.946 nr_hugepages=1024 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.946 resv_hugepages=0 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.946 surplus_hugepages=0 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.946 anon_hugepages=0 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8128032 kB' 'MemAvailable: 9510208 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 452164 kB' 'Inactive: 1267652 kB' 'Active(anon): 131304 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122448 kB' 'Mapped: 48448 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 133028 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 71824 kB' 'KernelStack: 6368 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 
kB' 'Committed_AS: 352356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.946 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 
07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.947 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 
07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8128032 kB' 'MemUsed: 4113944 kB' 'SwapCached: 0 kB' 'Active: 452144 kB' 'Inactive: 1267652 kB' 'Active(anon): 131284 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1598976 kB' 'Mapped: 48448 kB' 'AnonPages: 122396 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61204 kB' 'Slab: 133028 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 71824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.949 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 
07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.950 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.951 07:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.951 node0=1024 expecting 1024 00:04:05.951 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:05.952 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:05.952 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:05.952 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:05.952 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:05.952 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.952 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:06.244 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.244 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.244 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.244 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:06.244 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:06.244 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.507 07:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127464 kB' 'MemAvailable: 9509636 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 448796 kB' 'Inactive: 1267652 kB' 'Active(anon): 127936 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118804 kB' 'Mapped: 47924 kB' 'Shmem: 10464 kB' 'KReclaimable: 61196 kB' 'Slab: 132900 kB' 'SReclaimable: 61196 kB' 'SUnreclaim: 71704 kB' 'KernelStack: 6376 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.507 07:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.507 07:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.507 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 
07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.508 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127212 kB' 'MemAvailable: 9509384 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 448252 kB' 'Inactive: 1267652 kB' 'Active(anon): 127392 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118504 kB' 'Mapped: 47864 kB' 'Shmem: 10464 kB' 'KReclaimable: 61196 kB' 'Slab: 132900 kB' 'SReclaimable: 61196 kB' 'SUnreclaim: 71704 kB' 'KernelStack: 6328 kB' 'PageTables: 3692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.509 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.510 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127212 kB' 'MemAvailable: 9509384 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 448180 kB' 'Inactive: 1267652 kB' 'Active(anon): 127320 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118424 kB' 'Mapped: 47804 kB' 'Shmem: 10464 kB' 'KReclaimable: 61196 kB' 'Slab: 132896 kB' 'SReclaimable: 61196 kB' 'SUnreclaim: 71700 kB' 'KernelStack: 6280 kB' 'PageTables: 3544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.511 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:06.512 nr_hugepages=1024 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.512 resv_hugepages=0 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.512 surplus_hugepages=0 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.512 anon_hugepages=0 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.512 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8126960 kB' 'MemAvailable: 9509132 kB' 'Buffers: 2436 kB' 'Cached: 1596540 kB' 'SwapCached: 0 kB' 'Active: 448120 kB' 'Inactive: 1267652 kB' 'Active(anon): 127260 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118368 kB' 'Mapped: 47708 kB' 'Shmem: 10464 kB' 'KReclaimable: 61196 kB' 
'Slab: 132892 kB' 'SReclaimable: 61196 kB' 'SUnreclaim: 71696 kB' 'KernelStack: 6272 kB' 'PageTables: 3672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.513 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:06.514 07:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.514 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127220 kB' 'MemUsed: 4114756 kB' 'SwapCached: 0 kB' 'Active: 447908 kB' 'Inactive: 1267652 kB' 'Active(anon): 127048 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1598976 kB' 'Mapped: 47708 kB' 'AnonPages: 118212 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61196 kB' 'Slab: 132888 kB' 'SReclaimable: 61196 kB' 'SUnreclaim: 71692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 
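The trace above is the get_meminfo helper from test/setup/common.sh walking every key of /proc/meminfo (and, for the per-node call, /sys/devices/system/node/node0/meminfo) until it reaches the requested field, then echoing its value. A minimal sketch of that lookup pattern, assuming that stripping the "Node <N> " prefix and splitting on ': ' is all the helper needs; the function and variable names here are illustrative, not the real common.sh code:

get_meminfo_sketch() {
    local get=$1 node=${2-}
    local mem_f=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node meminfo lines carry a "Node <N> " prefix; drop it, then split
    # on ': ' exactly as the traced read loop does and stop at the first match.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

# On this runner: get_meminfo_sketch HugePages_Total  -> 1024
#                 get_meminfo_sketch HugePages_Surp 0 -> 0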
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 
07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.515 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.516 node0=1024 expecting 1024 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:06.516 00:04:06.516 real 0m1.061s 00:04:06.516 user 0m0.524s 00:04:06.516 sys 0m0.585s 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.516 07:29:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:06.516 ************************************ 00:04:06.516 END TEST no_shrink_alloc 00:04:06.516 ************************************ 00:04:06.516 07:29:32 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:06.516 07:29:32 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:06.516 07:29:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.516 07:29:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.516 07:29:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.516 07:29:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.516 07:29:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.516 07:29:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:06.516 07:29:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:06.516 00:04:06.516 real 0m4.543s 00:04:06.516 user 0m2.174s 00:04:06.516 sys 0m2.506s 00:04:06.516 07:29:32 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.516 07:29:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:06.516 ************************************ 00:04:06.516 END TEST hugepages 00:04:06.516 ************************************ 00:04:06.516 07:29:32 setup.sh -- 
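The no_shrink_alloc check that just finished boils down to per-node hugepage accounting: hugepages.sh reads HugePages_Total, HugePages_Free and HugePages_Surp for each NUMA node and compares them with what it allocated, which is where the "node0=1024 expecting 1024" line comes from. A rough equivalent of that accounting, written directly against the per-node meminfo files rather than through the suite's get_meminfo/nodes_test machinery (so this is a sketch, not the script's own code path):

expected=1024
for node_dir in /sys/devices/system/node/node[0-9]*; do
    [[ -e $node_dir/meminfo ]] || continue
    node=${node_dir##*node}
    total=$(awk -v n="$node" '$1 == "Node" && $2 == n && $3 == "HugePages_Total:" {print $4}' \
        "$node_dir/meminfo")
    echo "node$node=$total expecting $expected"   # mirrors the logged line
    [[ $total == "$expected" ]] || echo "node$node: unexpected hugepage count" >&2
done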
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:06.516 07:29:32 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.516 07:29:32 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.516 07:29:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:06.516 ************************************ 00:04:06.516 START TEST driver 00:04:06.516 ************************************ 00:04:06.516 07:29:32 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:06.774 * Looking for test storage... 00:04:06.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:06.774 07:29:32 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:06.774 07:29:32 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.774 07:29:32 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:07.339 07:29:32 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:07.339 07:29:32 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.339 07:29:32 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.339 07:29:32 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:07.339 ************************************ 00:04:07.339 START TEST guess_driver 00:04:07.339 ************************************ 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:07.339 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo 
uio_pci_generic 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:07.339 Looking for driver=uio_pci_generic 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.339 07:29:32 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:07.905 07:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:07.905 07:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:07.905 07:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.905 07:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.905 07:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:07.905 07:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.163 07:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.163 07:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:08.163 07:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.163 07:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:08.163 07:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:08.163 07:29:33 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.163 07:29:33 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.728 00:04:08.728 real 0m1.411s 00:04:08.728 user 0m0.506s 00:04:08.728 sys 0m0.904s 00:04:08.728 07:29:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.728 07:29:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:08.728 ************************************ 00:04:08.728 END TEST guess_driver 00:04:08.728 ************************************ 00:04:08.728 00:04:08.728 real 0m2.095s 00:04:08.728 user 0m0.753s 00:04:08.728 sys 0m1.399s 00:04:08.728 07:29:34 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.728 07:29:34 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:08.728 ************************************ 00:04:08.728 END TEST driver 00:04:08.728 ************************************ 00:04:08.728 07:29:34 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:08.728 07:29:34 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.728 07:29:34 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.728 07:29:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:08.728 ************************************ 00:04:08.728 START TEST devices 00:04:08.728 
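The guess_driver test above follows a simple preference order: try VFIO first, and treat it as usable only when at least one IOMMU group exists under /sys/kernel/iommu_groups or the unsafe no-IOMMU module parameter is set to Y; otherwise fall back to uio_pci_generic, which counts as present when modprobe --show-depends resolves it to a .ko. A condensed sketch of that decision (the function name is illustrative, and "vfio-pci" as the echoed driver string is an assumption about what the VFIO branch would report):

pick_driver_sketch() {
    local unsafe="" n_groups
    n_groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if (( n_groups > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci                 # assumed name for the VFIO case
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic          # the branch taken in the run above
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}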
************************************ 00:04:08.728 07:29:34 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:08.728 * Looking for test storage... 00:04:08.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:08.728 07:29:34 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:08.728 07:29:34 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:08.728 07:29:34 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.728 07:29:34 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:09.661 07:29:35 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
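Before touching any disk, get_zoned_devs screens out zoned namespaces: a block device whose queue/zoned attribute reports anything other than "none" is recorded so the mount tests skip it. In the run above every nvme device reported "none". A small sketch of that screen (variable names illustrative):

declare -A zoned_devs=()
for dev in /sys/block/nvme*; do
    [[ -e $dev/queue/zoned ]] || continue
    if [[ $(<"$dev/queue/zoned") != none ]]; then
        zoned_devs[${dev##*/}]=1      # would be excluded from the tests below
    fi
done
echo "zoned nvme block devices found: ${#zoned_devs[@]}"   # 0 in this run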
00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:09.661 07:29:35 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:09.661 07:29:35 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:09.661 No valid GPT data, bailing 00:04:09.661 07:29:35 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:09.661 07:29:35 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:09.661 07:29:35 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:09.661 07:29:35 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:09.661 07:29:35 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:09.661 07:29:35 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:09.661 07:29:35 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:09.661 07:29:35 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:09.661 No valid GPT data, bailing 00:04:09.661 07:29:35 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:09.661 07:29:35 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:09.661 07:29:35 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:09.661 07:29:35 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:09.661 07:29:35 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:09.661 07:29:35 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:09.661 07:29:35 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:09.662 07:29:35 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:09.662 07:29:35 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:09.662 07:29:35 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
00:04:09.662 07:29:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:09.662 07:29:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:09.662 07:29:35 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:09.662 07:29:35 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:09.662 07:29:35 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:09.662 07:29:35 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:09.662 07:29:35 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:09.662 No valid GPT data, bailing 00:04:09.662 07:29:35 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:09.920 07:29:35 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:09.920 07:29:35 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:09.920 07:29:35 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:09.920 07:29:35 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:09.920 07:29:35 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:09.920 07:29:35 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:09.920 07:29:35 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:09.920 07:29:35 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:09.920 07:29:35 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:09.920 07:29:35 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:09.920 07:29:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:09.920 07:29:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:09.920 07:29:35 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:09.920 07:29:35 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:09.920 07:29:35 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:09.920 07:29:35 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:09.920 07:29:35 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:09.920 No valid GPT data, bailing 00:04:09.920 07:29:35 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:09.920 07:29:35 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:09.920 07:29:35 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:09.920 07:29:35 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:09.920 07:29:35 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:09.920 07:29:35 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:09.920 07:29:35 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:09.920 07:29:35 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:09.920 07:29:35 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:09.920 07:29:35 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:09.920 07:29:35 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:09.920 07:29:35 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:09.920 07:29:35 setup.sh.devices 
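The loop that just ran qualifies each candidate disk twice: block_in_use asks blkid for a partition-table type and treats an empty answer (the "No valid GPT data, bailing" case) as free, and the size check requires at least min_disk_size bytes, 3221225472 here. A sketch of that screen, reading the sector count straight from sysfs instead of going through the suite's sec_size_to_bytes helper:

min_disk_size=3221225472              # value used in the run above
usable=()
for block in /sys/block/nvme*; do
    dev=${block##*/}
    # blkid prints nothing when no partition table is recognised, which is
    # exactly the "bailing" case logged above; such disks are free to use.
    pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null || true)
    [[ -z $pt ]] || continue
    size=$(( $(<"$block/size") * 512 ))   # sysfs size is in 512-byte sectors
    (( size >= min_disk_size )) && usable+=("$dev")
done
printf 'usable test disks: %s\n' "${usable[*]}"    # nvme0n1..n3 and nvme1n1 here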
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:09.920 07:29:35 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.921 07:29:35 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.921 07:29:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:09.921 ************************************ 00:04:09.921 START TEST nvme_mount 00:04:09.921 ************************************ 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:09.921 07:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:10.855 Creating new GPT entries in memory. 00:04:10.855 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:10.855 other utilities. 00:04:10.855 07:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:10.855 07:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:10.855 07:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:10.855 07:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:10.855 07:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:12.227 Creating new GPT entries in memory. 00:04:12.227 The operation has completed successfully. 
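The sgdisk messages above come from partition_drive wiping the test disk and creating its first partition. Reduced to the raw commands, the flow looks like this; the real script serialises the calls with flock and waits for the partition uevent through its own sync_dev_uevents.sh helper, for which udevadm settle stands in here:

disk=/dev/nvme0n1                     # the test disk selected earlier
sgdisk "$disk" --zap-all              # "GPT data structures destroyed!"
sgdisk "$disk" --new=1:2048:264191    # "The operation has completed successfully."
udevadm settle                        # stand-in for scripts/sync_dev_uevents.sh
[[ -b ${disk}p1 ]] && echo "partition ${disk}p1 ready"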
00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57134 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.227 07:29:37 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:12.227 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.485 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:12.485 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.485 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.485 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:12.485 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.485 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.485 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.485 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:12.485 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.485 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.485 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:12.485 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:12.485 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:12.485 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:12.486 07:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:12.743 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:12.743 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:12.743 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:12.743 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:12.743 07:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.744 07:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:13.002 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.002 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:13.002 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:13.002 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.002 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.002 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.260 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.260 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.260 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.260 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.260 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.260 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:13.260 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.260 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:13.260 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:13.260 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.260 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:13.260 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:13.260 07:29:38 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:13.260 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:13.260 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:13.261 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:13.261 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:13.261 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:13.261 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.261 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:13.261 07:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:13.261 07:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.261 07:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:13.519 07:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.519 07:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:13.519 07:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:13.519 07:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.519 07:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.519 07:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.778 07:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.778 07:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.778 07:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.778 07:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.778 07:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.778 07:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:13.778 07:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:13.778 07:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:13.778 07:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.778 07:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:13.778 07:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:13.778 07:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:13.778 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:13.778 00:04:13.778 real 0m3.974s 00:04:13.778 user 0m0.698s 00:04:13.778 sys 0m1.013s 00:04:13.778 07:29:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.778 07:29:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:13.778 ************************************ 00:04:13.778 END TEST nvme_mount 00:04:13.778 
************************************ 00:04:13.778 07:29:39 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:13.778 07:29:39 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.778 07:29:39 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.778 07:29:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:13.778 ************************************ 00:04:13.778 START TEST dm_mount 00:04:13.778 ************************************ 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:13.778 07:29:39 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:15.152 Creating new GPT entries in memory. 00:04:15.152 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:15.152 other utilities. 00:04:15.152 07:29:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:15.152 07:29:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.152 07:29:40 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:15.152 07:29:40 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:15.152 07:29:40 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:16.086 Creating new GPT entries in memory. 00:04:16.086 The operation has completed successfully. 
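For reference, the dm_mount partitioning being traced here reduces to a handful of standalone commands. The sketch below is a minimal reproduction outside the test harness, assuming the same scratch disk (/dev/nvme0n1) and the sector ranges shown in the trace; partprobe stands in for the harness's sync_dev_uevents.sh helper and is not part of the original run. Each partition spans 262144 sectors, i.e. 128 MiB with 512-byte sectors.

# Wipe any existing GPT/MBR structures on the scratch disk (destructive).
sgdisk /dev/nvme0n1 --zap-all
# Recreate the two 128 MiB test partitions at the offsets used by dm_mount,
# serializing access to the block device with flock as the harness does.
flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191
flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335
# Ask the kernel to re-read the partition table so nvme0n1p1/p2 appear.
partprobe /dev/nvme0n1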
00:04:16.086 07:29:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:16.086 07:29:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.086 07:29:41 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:16.086 07:29:41 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:16.086 07:29:41 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:17.021 The operation has completed successfully. 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57570 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.021 07:29:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:17.311 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.311 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:17.311 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:17.311 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.311 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.311 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.311 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.311 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.569 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.570 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.570 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:17.570 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:17.570 07:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:17.570 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:17.570 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:17.570 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:17.570 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:17.570 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:17.570 07:29:43 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:17.570 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:17.570 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:17.570 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:17.570 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:17.570 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:17.570 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.570 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:17.570 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:17.570 07:29:43 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.570 07:29:43 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:17.828 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.828 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:17.828 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:17.828 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.828 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.828 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.828 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.828 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.086 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:18.086 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.086 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:18.086 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:18.086 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:18.086 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:18.086 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:18.086 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:18.086 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:18.086 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.086 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:18.086 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:18.086 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:18.086 07:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:04:18.086 00:04:18.086 real 0m4.200s 00:04:18.086 user 0m0.456s 00:04:18.086 sys 0m0.704s 00:04:18.086 07:29:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.086 07:29:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:18.086 ************************************ 00:04:18.086 END TEST dm_mount 00:04:18.086 ************************************ 00:04:18.086 07:29:43 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:18.086 07:29:43 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:18.086 07:29:43 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.086 07:29:43 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.086 07:29:43 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:18.086 07:29:43 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.086 07:29:43 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:18.345 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:18.345 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:18.345 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:18.345 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:18.345 07:29:43 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:18.345 07:29:43 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:18.345 07:29:43 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:18.345 07:29:43 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.345 07:29:43 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:18.345 07:29:43 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.345 07:29:43 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:18.345 00:04:18.345 real 0m9.678s 00:04:18.345 user 0m1.794s 00:04:18.345 sys 0m2.284s 00:04:18.345 07:29:43 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.345 07:29:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:18.345 ************************************ 00:04:18.345 END TEST devices 00:04:18.345 ************************************ 00:04:18.603 00:04:18.603 real 0m21.218s 00:04:18.603 user 0m6.919s 00:04:18.603 sys 0m8.829s 00:04:18.603 07:29:43 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.603 07:29:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:18.603 ************************************ 00:04:18.603 END TEST setup.sh 00:04:18.603 ************************************ 00:04:18.603 07:29:43 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:19.169 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.169 Hugepages 00:04:19.169 node hugesize free / total 00:04:19.169 node0 1048576kB 0 / 0 00:04:19.169 node0 2048kB 2048 / 2048 00:04:19.169 00:04:19.169 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:19.169 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:19.427 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:19.427 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:04:19.427 07:29:44 -- spdk/autotest.sh@130 -- # uname -s 00:04:19.427 07:29:44 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:19.427 07:29:44 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:19.427 07:29:44 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:19.993 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.251 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:20.251 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:20.251 07:29:45 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:21.186 07:29:46 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:21.186 07:29:46 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:21.186 07:29:46 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:21.186 07:29:46 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:21.186 07:29:46 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:21.186 07:29:46 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:21.186 07:29:46 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:21.186 07:29:46 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:21.186 07:29:46 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:21.186 07:29:46 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:21.186 07:29:46 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:21.186 07:29:46 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.752 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:21.752 Waiting for block devices as requested 00:04:21.752 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:21.752 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:21.752 07:29:47 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:22.010 07:29:47 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:22.010 07:29:47 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:22.010 07:29:47 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:22.010 07:29:47 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:22.010 07:29:47 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:22.010 07:29:47 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:22.010 07:29:47 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:22.010 07:29:47 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:22.010 07:29:47 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:22.010 07:29:47 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:22.010 07:29:47 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:22.010 07:29:47 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:22.010 07:29:47 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:22.010 07:29:47 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:22.010 07:29:47 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:22.010 07:29:47 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 
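The pre_cleanup pass being traced here walks every NVMe controller reported by gen_nvme.sh and, via nvme-cli, checks whether Namespace Management is advertised (bit 3 of OACS) and whether any NVM capacity is left unallocated; with unvmcap at 0, as in this run, nothing has to be reverted. A minimal standalone sketch of the same checks, assuming the repo path from this log and a controller node at /dev/nvme1:

# PCI addresses (BDFs) of the NVMe controllers SPDK will use.
bdfs=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr')
echo "controllers: $bdfs"
# Identify-Controller on one of them; keep only the OACS field (e.g. " 0x12a").
oacs=$(nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2)
# Bit 3 (0x8) of OACS advertises Namespace Management/Attachment support.
ns_manage=$((oacs & 0x8))
# Unallocated NVM capacity; 0 means there are no namespaces to revert.
unvmcap=$(nvme id-ctrl /dev/nvme1 | grep unvmcap | cut -d: -f2)
echo "ns_manage=$ns_manage unvmcap=$unvmcap"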
00:04:22.010 07:29:47 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:22.010 07:29:47 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:22.010 07:29:47 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:22.010 07:29:47 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:22.010 07:29:47 -- common/autotest_common.sh@1557 -- # continue 00:04:22.010 07:29:47 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:22.010 07:29:47 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:22.010 07:29:47 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:22.010 07:29:47 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:22.010 07:29:47 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:22.010 07:29:47 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:22.010 07:29:47 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:22.010 07:29:47 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:22.010 07:29:47 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:22.010 07:29:47 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:22.010 07:29:47 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:22.010 07:29:47 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:22.010 07:29:47 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:22.010 07:29:47 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:22.010 07:29:47 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:22.010 07:29:47 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:22.010 07:29:47 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:22.010 07:29:47 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:22.010 07:29:47 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:22.010 07:29:47 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:22.010 07:29:47 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:22.010 07:29:47 -- common/autotest_common.sh@1557 -- # continue 00:04:22.010 07:29:47 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:22.010 07:29:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:22.010 07:29:47 -- common/autotest_common.sh@10 -- # set +x 00:04:22.010 07:29:47 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:22.010 07:29:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.010 07:29:47 -- common/autotest_common.sh@10 -- # set +x 00:04:22.010 07:29:47 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:22.574 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.574 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.832 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.832 07:29:48 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:22.832 07:29:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:22.832 07:29:48 -- common/autotest_common.sh@10 -- # set +x 00:04:22.832 07:29:48 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:22.832 07:29:48 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:22.832 07:29:48 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:22.832 07:29:48 -- common/autotest_common.sh@1577 -- 
# bdfs=() 00:04:22.832 07:29:48 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:22.832 07:29:48 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:22.832 07:29:48 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:22.832 07:29:48 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:22.832 07:29:48 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:22.832 07:29:48 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:22.832 07:29:48 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:22.832 07:29:48 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:22.832 07:29:48 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:22.832 07:29:48 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:22.832 07:29:48 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:22.832 07:29:48 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:22.832 07:29:48 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.832 07:29:48 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:22.832 07:29:48 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:22.832 07:29:48 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:22.832 07:29:48 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.832 07:29:48 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:22.832 07:29:48 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:22.832 07:29:48 -- common/autotest_common.sh@1593 -- # return 0 00:04:22.832 07:29:48 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:22.832 07:29:48 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:22.832 07:29:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:22.832 07:29:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:22.832 07:29:48 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:22.832 07:29:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.832 07:29:48 -- common/autotest_common.sh@10 -- # set +x 00:04:22.832 07:29:48 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:22.832 07:29:48 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:22.832 07:29:48 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:22.832 07:29:48 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:22.832 07:29:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.832 07:29:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.832 07:29:48 -- common/autotest_common.sh@10 -- # set +x 00:04:22.832 ************************************ 00:04:22.832 START TEST env 00:04:22.832 ************************************ 00:04:22.832 07:29:48 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:23.090 * Looking for test storage... 
00:04:23.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:23.090 07:29:48 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:23.090 07:29:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.090 07:29:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.090 07:29:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.090 ************************************ 00:04:23.090 START TEST env_memory 00:04:23.090 ************************************ 00:04:23.090 07:29:48 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:23.090 00:04:23.090 00:04:23.090 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.090 http://cunit.sourceforge.net/ 00:04:23.090 00:04:23.090 00:04:23.090 Suite: memory 00:04:23.090 Test: alloc and free memory map ...[2024-07-26 07:29:48.545725] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:23.090 passed 00:04:23.090 Test: mem map translation ...[2024-07-26 07:29:48.576778] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:23.090 [2024-07-26 07:29:48.576827] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:23.090 [2024-07-26 07:29:48.576885] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:23.090 [2024-07-26 07:29:48.576897] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:23.090 passed 00:04:23.090 Test: mem map registration ...[2024-07-26 07:29:48.640611] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:23.090 [2024-07-26 07:29:48.640658] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:23.090 passed 00:04:23.348 Test: mem map adjacent registrations ...passed 00:04:23.348 00:04:23.348 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.348 suites 1 1 n/a 0 0 00:04:23.348 tests 4 4 4 0 0 00:04:23.348 asserts 152 152 152 0 n/a 00:04:23.348 00:04:23.348 Elapsed time = 0.214 seconds 00:04:23.348 00:04:23.348 real 0m0.231s 00:04:23.348 user 0m0.213s 00:04:23.348 sys 0m0.013s 00:04:23.348 07:29:48 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.348 07:29:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:23.348 ************************************ 00:04:23.348 END TEST env_memory 00:04:23.348 ************************************ 00:04:23.348 07:29:48 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:23.348 07:29:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.348 07:29:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.348 07:29:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.348 ************************************ 00:04:23.348 START TEST env_vtophys 00:04:23.348 ************************************ 00:04:23.348 07:29:48 
env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:23.348 EAL: lib.eal log level changed from notice to debug 00:04:23.348 EAL: Detected lcore 0 as core 0 on socket 0 00:04:23.348 EAL: Detected lcore 1 as core 0 on socket 0 00:04:23.348 EAL: Detected lcore 2 as core 0 on socket 0 00:04:23.348 EAL: Detected lcore 3 as core 0 on socket 0 00:04:23.348 EAL: Detected lcore 4 as core 0 on socket 0 00:04:23.348 EAL: Detected lcore 5 as core 0 on socket 0 00:04:23.348 EAL: Detected lcore 6 as core 0 on socket 0 00:04:23.348 EAL: Detected lcore 7 as core 0 on socket 0 00:04:23.348 EAL: Detected lcore 8 as core 0 on socket 0 00:04:23.348 EAL: Detected lcore 9 as core 0 on socket 0 00:04:23.348 EAL: Maximum logical cores by configuration: 128 00:04:23.348 EAL: Detected CPU lcores: 10 00:04:23.348 EAL: Detected NUMA nodes: 1 00:04:23.348 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:23.348 EAL: Detected shared linkage of DPDK 00:04:23.348 EAL: No shared files mode enabled, IPC will be disabled 00:04:23.348 EAL: Selected IOVA mode 'PA' 00:04:23.348 EAL: Probing VFIO support... 00:04:23.348 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:23.348 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:23.348 EAL: Ask a virtual area of 0x2e000 bytes 00:04:23.348 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:23.348 EAL: Setting up physically contiguous memory... 00:04:23.348 EAL: Setting maximum number of open files to 524288 00:04:23.348 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:23.348 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:23.348 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.348 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:23.348 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.348 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.348 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:23.348 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:23.348 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.348 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:23.348 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.348 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.348 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:23.348 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:23.348 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.348 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:23.348 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.348 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.348 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:23.348 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:23.348 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.348 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:23.348 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.348 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.348 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:23.349 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:23.349 EAL: Hugepages will be freed exactly as allocated. 
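The EAL bring-up above relies on 2 MiB hugepages having been reserved beforehand; the setup.sh status output earlier in this log shows 2048 of them on node0. A quick way to inspect the same state by hand, assuming a single-NUMA-node machine like this VM (these commands are not part of the harness):

# Kernel-wide hugepage accounting (total, free, page size).
grep -i huge /proc/meminfo
# Per-node pool, matching the "node0 2048kB 2048 / 2048" line from setup.sh status.
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages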
00:04:23.349 EAL: No shared files mode enabled, IPC is disabled 00:04:23.349 EAL: No shared files mode enabled, IPC is disabled 00:04:23.349 EAL: TSC frequency is ~2200000 KHz 00:04:23.349 EAL: Main lcore 0 is ready (tid=7f1c500c5a00;cpuset=[0]) 00:04:23.349 EAL: Trying to obtain current memory policy. 00:04:23.349 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.349 EAL: Restoring previous memory policy: 0 00:04:23.349 EAL: request: mp_malloc_sync 00:04:23.349 EAL: No shared files mode enabled, IPC is disabled 00:04:23.349 EAL: Heap on socket 0 was expanded by 2MB 00:04:23.349 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:23.349 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:23.349 EAL: Mem event callback 'spdk:(nil)' registered 00:04:23.349 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:23.349 00:04:23.349 00:04:23.349 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.349 http://cunit.sourceforge.net/ 00:04:23.349 00:04:23.349 00:04:23.349 Suite: components_suite 00:04:23.349 Test: vtophys_malloc_test ...passed 00:04:23.349 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:23.349 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.349 EAL: Restoring previous memory policy: 4 00:04:23.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.349 EAL: request: mp_malloc_sync 00:04:23.349 EAL: No shared files mode enabled, IPC is disabled 00:04:23.349 EAL: Heap on socket 0 was expanded by 4MB 00:04:23.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.349 EAL: request: mp_malloc_sync 00:04:23.349 EAL: No shared files mode enabled, IPC is disabled 00:04:23.349 EAL: Heap on socket 0 was shrunk by 4MB 00:04:23.349 EAL: Trying to obtain current memory policy. 00:04:23.349 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.349 EAL: Restoring previous memory policy: 4 00:04:23.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.349 EAL: request: mp_malloc_sync 00:04:23.349 EAL: No shared files mode enabled, IPC is disabled 00:04:23.349 EAL: Heap on socket 0 was expanded by 6MB 00:04:23.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.349 EAL: request: mp_malloc_sync 00:04:23.349 EAL: No shared files mode enabled, IPC is disabled 00:04:23.349 EAL: Heap on socket 0 was shrunk by 6MB 00:04:23.349 EAL: Trying to obtain current memory policy. 00:04:23.349 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.349 EAL: Restoring previous memory policy: 4 00:04:23.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.349 EAL: request: mp_malloc_sync 00:04:23.349 EAL: No shared files mode enabled, IPC is disabled 00:04:23.349 EAL: Heap on socket 0 was expanded by 10MB 00:04:23.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.349 EAL: request: mp_malloc_sync 00:04:23.349 EAL: No shared files mode enabled, IPC is disabled 00:04:23.349 EAL: Heap on socket 0 was shrunk by 10MB 00:04:23.349 EAL: Trying to obtain current memory policy. 
00:04:23.607 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.607 EAL: Restoring previous memory policy: 4 00:04:23.607 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.607 EAL: request: mp_malloc_sync 00:04:23.607 EAL: No shared files mode enabled, IPC is disabled 00:04:23.607 EAL: Heap on socket 0 was expanded by 18MB 00:04:23.607 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.607 EAL: request: mp_malloc_sync 00:04:23.607 EAL: No shared files mode enabled, IPC is disabled 00:04:23.607 EAL: Heap on socket 0 was shrunk by 18MB 00:04:23.607 EAL: Trying to obtain current memory policy. 00:04:23.607 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.607 EAL: Restoring previous memory policy: 4 00:04:23.607 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.607 EAL: request: mp_malloc_sync 00:04:23.607 EAL: No shared files mode enabled, IPC is disabled 00:04:23.607 EAL: Heap on socket 0 was expanded by 34MB 00:04:23.607 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.607 EAL: request: mp_malloc_sync 00:04:23.607 EAL: No shared files mode enabled, IPC is disabled 00:04:23.607 EAL: Heap on socket 0 was shrunk by 34MB 00:04:23.607 EAL: Trying to obtain current memory policy. 00:04:23.607 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.607 EAL: Restoring previous memory policy: 4 00:04:23.607 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.607 EAL: request: mp_malloc_sync 00:04:23.607 EAL: No shared files mode enabled, IPC is disabled 00:04:23.607 EAL: Heap on socket 0 was expanded by 66MB 00:04:23.607 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.607 EAL: request: mp_malloc_sync 00:04:23.607 EAL: No shared files mode enabled, IPC is disabled 00:04:23.607 EAL: Heap on socket 0 was shrunk by 66MB 00:04:23.607 EAL: Trying to obtain current memory policy. 00:04:23.607 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.607 EAL: Restoring previous memory policy: 4 00:04:23.607 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.607 EAL: request: mp_malloc_sync 00:04:23.607 EAL: No shared files mode enabled, IPC is disabled 00:04:23.607 EAL: Heap on socket 0 was expanded by 130MB 00:04:23.607 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.607 EAL: request: mp_malloc_sync 00:04:23.607 EAL: No shared files mode enabled, IPC is disabled 00:04:23.607 EAL: Heap on socket 0 was shrunk by 130MB 00:04:23.607 EAL: Trying to obtain current memory policy. 00:04:23.607 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.865 EAL: Restoring previous memory policy: 4 00:04:23.865 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.865 EAL: request: mp_malloc_sync 00:04:23.865 EAL: No shared files mode enabled, IPC is disabled 00:04:23.865 EAL: Heap on socket 0 was expanded by 258MB 00:04:23.865 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.865 EAL: request: mp_malloc_sync 00:04:23.865 EAL: No shared files mode enabled, IPC is disabled 00:04:23.865 EAL: Heap on socket 0 was shrunk by 258MB 00:04:23.865 EAL: Trying to obtain current memory policy. 
00:04:23.865 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.123 EAL: Restoring previous memory policy: 4 00:04:24.123 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.123 EAL: request: mp_malloc_sync 00:04:24.123 EAL: No shared files mode enabled, IPC is disabled 00:04:24.123 EAL: Heap on socket 0 was expanded by 514MB 00:04:24.123 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.381 EAL: request: mp_malloc_sync 00:04:24.381 EAL: No shared files mode enabled, IPC is disabled 00:04:24.381 EAL: Heap on socket 0 was shrunk by 514MB 00:04:24.381 EAL: Trying to obtain current memory policy. 00:04:24.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.639 EAL: Restoring previous memory policy: 4 00:04:24.639 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.639 EAL: request: mp_malloc_sync 00:04:24.639 EAL: No shared files mode enabled, IPC is disabled 00:04:24.639 EAL: Heap on socket 0 was expanded by 1026MB 00:04:25.204 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.462 passed 00:04:25.462 00:04:25.462 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.462 suites 1 1 n/a 0 0 00:04:25.462 tests 2 2 2 0 0 00:04:25.462 asserts 5183 5183 5183 0 n/a 00:04:25.462 00:04:25.462 Elapsed time = 1.839 seconds 00:04:25.462 EAL: request: mp_malloc_sync 00:04:25.462 EAL: No shared files mode enabled, IPC is disabled 00:04:25.462 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:25.462 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.462 EAL: request: mp_malloc_sync 00:04:25.462 EAL: No shared files mode enabled, IPC is disabled 00:04:25.462 EAL: Heap on socket 0 was shrunk by 2MB 00:04:25.462 EAL: No shared files mode enabled, IPC is disabled 00:04:25.462 EAL: No shared files mode enabled, IPC is disabled 00:04:25.462 EAL: No shared files mode enabled, IPC is disabled 00:04:25.462 00:04:25.462 real 0m2.047s 00:04:25.462 user 0m1.185s 00:04:25.462 sys 0m0.723s 00:04:25.462 07:29:50 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.462 07:29:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:25.463 ************************************ 00:04:25.463 END TEST env_vtophys 00:04:25.463 ************************************ 00:04:25.463 07:29:50 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:25.463 07:29:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.463 07:29:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.463 07:29:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.463 ************************************ 00:04:25.463 START TEST env_pci 00:04:25.463 ************************************ 00:04:25.463 07:29:50 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:25.463 00:04:25.463 00:04:25.463 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.463 http://cunit.sourceforge.net/ 00:04:25.463 00:04:25.463 00:04:25.463 Suite: pci 00:04:25.463 Test: pci_hook ...[2024-07-26 07:29:50.896220] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58769 has claimed it 00:04:25.463 passed 00:04:25.463 00:04:25.463 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.463 suites 1 1 n/a 0 0 00:04:25.463 tests 1 1 1 0 0 00:04:25.463 asserts 25 25 25 0 n/a 00:04:25.463 00:04:25.463 Elapsed time = 0.003 seconds 00:04:25.463 EAL: Cannot find 
device (10000:00:01.0) 00:04:25.463 EAL: Failed to attach device on primary process 00:04:25.463 00:04:25.463 real 0m0.024s 00:04:25.463 user 0m0.013s 00:04:25.463 sys 0m0.010s 00:04:25.463 07:29:50 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.463 07:29:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:25.463 ************************************ 00:04:25.463 END TEST env_pci 00:04:25.463 ************************************ 00:04:25.463 07:29:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:25.463 07:29:50 env -- env/env.sh@15 -- # uname 00:04:25.463 07:29:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:25.463 07:29:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:25.463 07:29:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:25.463 07:29:50 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:25.463 07:29:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.463 07:29:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.463 ************************************ 00:04:25.463 START TEST env_dpdk_post_init 00:04:25.463 ************************************ 00:04:25.463 07:29:50 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:25.463 EAL: Detected CPU lcores: 10 00:04:25.463 EAL: Detected NUMA nodes: 1 00:04:25.463 EAL: Detected shared linkage of DPDK 00:04:25.463 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:25.463 EAL: Selected IOVA mode 'PA' 00:04:25.721 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:25.721 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:25.721 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:25.721 Starting DPDK initialization... 00:04:25.721 Starting SPDK post initialization... 00:04:25.721 SPDK NVMe probe 00:04:25.721 Attaching to 0000:00:10.0 00:04:25.721 Attaching to 0000:00:11.0 00:04:25.721 Attached to 0000:00:10.0 00:04:25.721 Attached to 0000:00:11.0 00:04:25.721 Cleaning up... 
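The probes above succeed only because setup.sh moved both controllers from the kernel nvme driver to uio_pci_generic earlier in the log (VFIO is unavailable in this VM). A minimal sketch of inspecting and flipping that binding with the same script, assuming the repo path used throughout this run:

SPDK=/home/vagrant/spdk_repo/spdk
# Show which driver owns each NVMe controller plus the hugepage pool.
sudo "$SPDK/scripts/setup.sh" status
# Rebind the controllers to a userspace driver (uio_pci_generic here) for SPDK.
sudo "$SPDK/scripts/setup.sh"
# Hand the controllers back to the kernel nvme driver afterwards.
sudo "$SPDK/scripts/setup.sh" reset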
00:04:25.721 00:04:25.721 real 0m0.182s 00:04:25.721 user 0m0.047s 00:04:25.721 sys 0m0.035s 00:04:25.721 07:29:51 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.721 07:29:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:25.721 ************************************ 00:04:25.721 END TEST env_dpdk_post_init 00:04:25.721 ************************************ 00:04:25.721 07:29:51 env -- env/env.sh@26 -- # uname 00:04:25.721 07:29:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:25.721 07:29:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:25.721 07:29:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.721 07:29:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.721 07:29:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.721 ************************************ 00:04:25.721 START TEST env_mem_callbacks 00:04:25.721 ************************************ 00:04:25.721 07:29:51 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:25.721 EAL: Detected CPU lcores: 10 00:04:25.721 EAL: Detected NUMA nodes: 1 00:04:25.721 EAL: Detected shared linkage of DPDK 00:04:25.721 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:25.721 EAL: Selected IOVA mode 'PA' 00:04:25.979 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:25.979 00:04:25.979 00:04:25.979 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.979 http://cunit.sourceforge.net/ 00:04:25.979 00:04:25.979 00:04:25.979 Suite: memory 00:04:25.979 Test: test ... 00:04:25.979 register 0x200000200000 2097152 00:04:25.979 malloc 3145728 00:04:25.979 register 0x200000400000 4194304 00:04:25.979 buf 0x200000500000 len 3145728 PASSED 00:04:25.979 malloc 64 00:04:25.979 buf 0x2000004fff40 len 64 PASSED 00:04:25.979 malloc 4194304 00:04:25.979 register 0x200000800000 6291456 00:04:25.979 buf 0x200000a00000 len 4194304 PASSED 00:04:25.979 free 0x200000500000 3145728 00:04:25.979 free 0x2000004fff40 64 00:04:25.979 unregister 0x200000400000 4194304 PASSED 00:04:25.979 free 0x200000a00000 4194304 00:04:25.979 unregister 0x200000800000 6291456 PASSED 00:04:25.979 malloc 8388608 00:04:25.979 register 0x200000400000 10485760 00:04:25.979 buf 0x200000600000 len 8388608 PASSED 00:04:25.979 free 0x200000600000 8388608 00:04:25.979 unregister 0x200000400000 10485760 PASSED 00:04:25.979 passed 00:04:25.979 00:04:25.979 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.979 suites 1 1 n/a 0 0 00:04:25.979 tests 1 1 1 0 0 00:04:25.979 asserts 15 15 15 0 n/a 00:04:25.979 00:04:25.979 Elapsed time = 0.009 seconds 00:04:25.979 00:04:25.979 real 0m0.144s 00:04:25.979 user 0m0.021s 00:04:25.979 sys 0m0.022s 00:04:25.979 07:29:51 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.979 07:29:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:25.979 ************************************ 00:04:25.979 END TEST env_mem_callbacks 00:04:25.979 ************************************ 00:04:25.979 00:04:25.979 real 0m2.969s 00:04:25.979 user 0m1.588s 00:04:25.979 sys 0m1.010s 00:04:25.979 07:29:51 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.979 07:29:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.979 ************************************ 00:04:25.979 END TEST env 00:04:25.979 
************************************ 00:04:25.979 07:29:51 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:25.979 07:29:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.979 07:29:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.979 07:29:51 -- common/autotest_common.sh@10 -- # set +x 00:04:25.979 ************************************ 00:04:25.979 START TEST rpc 00:04:25.979 ************************************ 00:04:25.979 07:29:51 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:25.979 * Looking for test storage... 00:04:25.979 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:25.979 07:29:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58879 00:04:25.979 07:29:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.979 07:29:51 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:25.979 07:29:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58879 00:04:25.979 07:29:51 rpc -- common/autotest_common.sh@831 -- # '[' -z 58879 ']' 00:04:25.979 07:29:51 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.979 07:29:51 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:25.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.979 07:29:51 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.979 07:29:51 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:25.979 07:29:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.237 [2024-07-26 07:29:51.582213] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:04:26.237 [2024-07-26 07:29:51.582323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58879 ] 00:04:26.237 [2024-07-26 07:29:51.722760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.495 [2024-07-26 07:29:51.848196] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:26.495 [2024-07-26 07:29:51.848282] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58879' to capture a snapshot of events at runtime. 00:04:26.496 [2024-07-26 07:29:51.848311] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:26.496 [2024-07-26 07:29:51.848321] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:26.496 [2024-07-26 07:29:51.848328] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58879 for offline analysis/debug. 
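The rpc suite's target (pid 58879) was launched with '-e bdev', which is what the app_setup_trace notices above refer to and why rpc_trace_cmd_test later reports "tpoint_group_mask": "0x8" with the bdev group fully enabled. Per those notices, the trace can be inspected live or the shared-memory file copied for offline analysis; both commands below come straight from the notices and only apply to this particular run (the copy destination is arbitrary):

    spdk_trace -s spdk_tgt -p 58879
    cp /dev/shm/spdk_tgt_trace.pid58879 /tmp/    # keep the trace for offline analysis/debug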
00:04:26.496 [2024-07-26 07:29:51.848364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.496 [2024-07-26 07:29:51.925120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:27.061 07:29:52 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:27.061 07:29:52 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:27.061 07:29:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:27.061 07:29:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:27.061 07:29:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:27.061 07:29:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:27.061 07:29:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.061 07:29:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.061 07:29:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.061 ************************************ 00:04:27.061 START TEST rpc_integrity 00:04:27.061 ************************************ 00:04:27.061 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:27.061 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:27.061 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.061 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.061 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.061 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:27.061 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:27.061 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:27.061 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:27.061 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.061 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.061 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.061 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:27.061 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:27.061 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.061 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.061 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.061 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:27.061 { 00:04:27.061 "name": "Malloc0", 00:04:27.061 "aliases": [ 00:04:27.061 "005176ac-bcdd-49d3-bc33-2f5e5d09f512" 00:04:27.061 ], 00:04:27.061 "product_name": "Malloc disk", 00:04:27.061 "block_size": 512, 00:04:27.061 "num_blocks": 16384, 00:04:27.061 "uuid": "005176ac-bcdd-49d3-bc33-2f5e5d09f512", 00:04:27.061 "assigned_rate_limits": { 00:04:27.061 "rw_ios_per_sec": 0, 00:04:27.061 "rw_mbytes_per_sec": 0, 00:04:27.061 "r_mbytes_per_sec": 0, 00:04:27.061 "w_mbytes_per_sec": 0 00:04:27.061 }, 00:04:27.061 "claimed": false, 00:04:27.061 "zoned": false, 00:04:27.061 
"supported_io_types": { 00:04:27.061 "read": true, 00:04:27.061 "write": true, 00:04:27.061 "unmap": true, 00:04:27.061 "flush": true, 00:04:27.061 "reset": true, 00:04:27.061 "nvme_admin": false, 00:04:27.061 "nvme_io": false, 00:04:27.061 "nvme_io_md": false, 00:04:27.061 "write_zeroes": true, 00:04:27.061 "zcopy": true, 00:04:27.061 "get_zone_info": false, 00:04:27.061 "zone_management": false, 00:04:27.061 "zone_append": false, 00:04:27.061 "compare": false, 00:04:27.061 "compare_and_write": false, 00:04:27.061 "abort": true, 00:04:27.061 "seek_hole": false, 00:04:27.061 "seek_data": false, 00:04:27.061 "copy": true, 00:04:27.061 "nvme_iov_md": false 00:04:27.061 }, 00:04:27.061 "memory_domains": [ 00:04:27.061 { 00:04:27.061 "dma_device_id": "system", 00:04:27.061 "dma_device_type": 1 00:04:27.061 }, 00:04:27.061 { 00:04:27.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.061 "dma_device_type": 2 00:04:27.061 } 00:04:27.061 ], 00:04:27.061 "driver_specific": {} 00:04:27.061 } 00:04:27.061 ]' 00:04:27.320 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:27.320 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:27.320 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:27.320 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.320 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.320 [2024-07-26 07:29:52.710525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:27.320 [2024-07-26 07:29:52.710577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:27.320 [2024-07-26 07:29:52.710598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa68da0 00:04:27.320 [2024-07-26 07:29:52.710609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:27.320 [2024-07-26 07:29:52.712157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:27.320 [2024-07-26 07:29:52.712209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:27.320 Passthru0 00:04:27.320 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.320 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:27.320 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.320 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.320 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.320 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:27.320 { 00:04:27.320 "name": "Malloc0", 00:04:27.320 "aliases": [ 00:04:27.320 "005176ac-bcdd-49d3-bc33-2f5e5d09f512" 00:04:27.320 ], 00:04:27.320 "product_name": "Malloc disk", 00:04:27.320 "block_size": 512, 00:04:27.320 "num_blocks": 16384, 00:04:27.320 "uuid": "005176ac-bcdd-49d3-bc33-2f5e5d09f512", 00:04:27.320 "assigned_rate_limits": { 00:04:27.320 "rw_ios_per_sec": 0, 00:04:27.320 "rw_mbytes_per_sec": 0, 00:04:27.320 "r_mbytes_per_sec": 0, 00:04:27.320 "w_mbytes_per_sec": 0 00:04:27.320 }, 00:04:27.320 "claimed": true, 00:04:27.320 "claim_type": "exclusive_write", 00:04:27.320 "zoned": false, 00:04:27.320 "supported_io_types": { 00:04:27.320 "read": true, 00:04:27.320 "write": true, 00:04:27.320 "unmap": true, 00:04:27.320 "flush": true, 00:04:27.320 "reset": true, 00:04:27.320 "nvme_admin": false, 
00:04:27.320 "nvme_io": false, 00:04:27.320 "nvme_io_md": false, 00:04:27.320 "write_zeroes": true, 00:04:27.320 "zcopy": true, 00:04:27.320 "get_zone_info": false, 00:04:27.320 "zone_management": false, 00:04:27.320 "zone_append": false, 00:04:27.320 "compare": false, 00:04:27.320 "compare_and_write": false, 00:04:27.320 "abort": true, 00:04:27.320 "seek_hole": false, 00:04:27.320 "seek_data": false, 00:04:27.320 "copy": true, 00:04:27.320 "nvme_iov_md": false 00:04:27.320 }, 00:04:27.320 "memory_domains": [ 00:04:27.320 { 00:04:27.320 "dma_device_id": "system", 00:04:27.320 "dma_device_type": 1 00:04:27.320 }, 00:04:27.320 { 00:04:27.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.320 "dma_device_type": 2 00:04:27.320 } 00:04:27.320 ], 00:04:27.320 "driver_specific": {} 00:04:27.320 }, 00:04:27.320 { 00:04:27.320 "name": "Passthru0", 00:04:27.320 "aliases": [ 00:04:27.320 "c806adf7-9f71-55d8-91e6-998143aa3b13" 00:04:27.320 ], 00:04:27.320 "product_name": "passthru", 00:04:27.320 "block_size": 512, 00:04:27.320 "num_blocks": 16384, 00:04:27.320 "uuid": "c806adf7-9f71-55d8-91e6-998143aa3b13", 00:04:27.320 "assigned_rate_limits": { 00:04:27.320 "rw_ios_per_sec": 0, 00:04:27.321 "rw_mbytes_per_sec": 0, 00:04:27.321 "r_mbytes_per_sec": 0, 00:04:27.321 "w_mbytes_per_sec": 0 00:04:27.321 }, 00:04:27.321 "claimed": false, 00:04:27.321 "zoned": false, 00:04:27.321 "supported_io_types": { 00:04:27.321 "read": true, 00:04:27.321 "write": true, 00:04:27.321 "unmap": true, 00:04:27.321 "flush": true, 00:04:27.321 "reset": true, 00:04:27.321 "nvme_admin": false, 00:04:27.321 "nvme_io": false, 00:04:27.321 "nvme_io_md": false, 00:04:27.321 "write_zeroes": true, 00:04:27.321 "zcopy": true, 00:04:27.321 "get_zone_info": false, 00:04:27.321 "zone_management": false, 00:04:27.321 "zone_append": false, 00:04:27.321 "compare": false, 00:04:27.321 "compare_and_write": false, 00:04:27.321 "abort": true, 00:04:27.321 "seek_hole": false, 00:04:27.321 "seek_data": false, 00:04:27.321 "copy": true, 00:04:27.321 "nvme_iov_md": false 00:04:27.321 }, 00:04:27.321 "memory_domains": [ 00:04:27.321 { 00:04:27.321 "dma_device_id": "system", 00:04:27.321 "dma_device_type": 1 00:04:27.321 }, 00:04:27.321 { 00:04:27.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.321 "dma_device_type": 2 00:04:27.321 } 00:04:27.321 ], 00:04:27.321 "driver_specific": { 00:04:27.321 "passthru": { 00:04:27.321 "name": "Passthru0", 00:04:27.321 "base_bdev_name": "Malloc0" 00:04:27.321 } 00:04:27.321 } 00:04:27.321 } 00:04:27.321 ]' 00:04:27.321 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:27.321 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:27.321 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:27.321 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.321 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.321 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.321 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:27.321 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.321 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.321 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.321 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:27.321 07:29:52 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.321 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.321 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.321 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:27.321 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:27.321 07:29:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:27.321 00:04:27.321 real 0m0.318s 00:04:27.321 user 0m0.212s 00:04:27.321 sys 0m0.035s 00:04:27.321 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.321 ************************************ 00:04:27.321 END TEST rpc_integrity 00:04:27.321 07:29:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.321 ************************************ 00:04:27.321 07:29:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:27.321 07:29:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.321 07:29:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.321 07:29:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.579 ************************************ 00:04:27.579 START TEST rpc_plugins 00:04:27.579 ************************************ 00:04:27.579 07:29:52 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:27.579 07:29:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:27.579 07:29:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.579 07:29:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.579 07:29:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.579 07:29:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:27.579 07:29:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:27.579 07:29:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.579 07:29:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.579 07:29:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.579 07:29:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:27.579 { 00:04:27.579 "name": "Malloc1", 00:04:27.579 "aliases": [ 00:04:27.579 "1a46a61f-45ff-407b-ab5a-54e0a2d5a1b8" 00:04:27.579 ], 00:04:27.579 "product_name": "Malloc disk", 00:04:27.579 "block_size": 4096, 00:04:27.579 "num_blocks": 256, 00:04:27.579 "uuid": "1a46a61f-45ff-407b-ab5a-54e0a2d5a1b8", 00:04:27.579 "assigned_rate_limits": { 00:04:27.579 "rw_ios_per_sec": 0, 00:04:27.579 "rw_mbytes_per_sec": 0, 00:04:27.579 "r_mbytes_per_sec": 0, 00:04:27.579 "w_mbytes_per_sec": 0 00:04:27.579 }, 00:04:27.579 "claimed": false, 00:04:27.579 "zoned": false, 00:04:27.579 "supported_io_types": { 00:04:27.579 "read": true, 00:04:27.579 "write": true, 00:04:27.579 "unmap": true, 00:04:27.579 "flush": true, 00:04:27.579 "reset": true, 00:04:27.579 "nvme_admin": false, 00:04:27.579 "nvme_io": false, 00:04:27.579 "nvme_io_md": false, 00:04:27.579 "write_zeroes": true, 00:04:27.579 "zcopy": true, 00:04:27.579 "get_zone_info": false, 00:04:27.579 "zone_management": false, 00:04:27.579 "zone_append": false, 00:04:27.579 "compare": false, 00:04:27.579 "compare_and_write": false, 00:04:27.579 "abort": true, 00:04:27.579 "seek_hole": false, 00:04:27.579 "seek_data": false, 00:04:27.579 "copy": true, 00:04:27.579 "nvme_iov_md": false 00:04:27.579 }, 00:04:27.579 "memory_domains": [ 00:04:27.579 { 
00:04:27.579 "dma_device_id": "system", 00:04:27.579 "dma_device_type": 1 00:04:27.579 }, 00:04:27.579 { 00:04:27.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.579 "dma_device_type": 2 00:04:27.579 } 00:04:27.579 ], 00:04:27.579 "driver_specific": {} 00:04:27.579 } 00:04:27.579 ]' 00:04:27.579 07:29:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:27.579 07:29:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:27.579 07:29:53 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:27.579 07:29:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.579 07:29:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.580 07:29:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.580 07:29:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:27.580 07:29:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.580 07:29:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.580 07:29:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.580 07:29:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:27.580 07:29:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:27.580 ************************************ 00:04:27.580 END TEST rpc_plugins 00:04:27.580 ************************************ 00:04:27.580 07:29:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:27.580 00:04:27.580 real 0m0.163s 00:04:27.580 user 0m0.103s 00:04:27.580 sys 0m0.022s 00:04:27.580 07:29:53 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.580 07:29:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.580 07:29:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:27.580 07:29:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.580 07:29:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.580 07:29:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.580 ************************************ 00:04:27.580 START TEST rpc_trace_cmd_test 00:04:27.580 ************************************ 00:04:27.580 07:29:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:27.580 07:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:27.580 07:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:27.580 07:29:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.580 07:29:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:27.580 07:29:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.580 07:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:27.580 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58879", 00:04:27.580 "tpoint_group_mask": "0x8", 00:04:27.580 "iscsi_conn": { 00:04:27.580 "mask": "0x2", 00:04:27.580 "tpoint_mask": "0x0" 00:04:27.580 }, 00:04:27.580 "scsi": { 00:04:27.580 "mask": "0x4", 00:04:27.580 "tpoint_mask": "0x0" 00:04:27.580 }, 00:04:27.580 "bdev": { 00:04:27.580 "mask": "0x8", 00:04:27.580 "tpoint_mask": "0xffffffffffffffff" 00:04:27.580 }, 00:04:27.580 "nvmf_rdma": { 00:04:27.580 "mask": "0x10", 00:04:27.580 "tpoint_mask": "0x0" 00:04:27.580 }, 00:04:27.580 "nvmf_tcp": { 00:04:27.580 "mask": "0x20", 00:04:27.580 "tpoint_mask": "0x0" 00:04:27.580 }, 00:04:27.580 "ftl": { 00:04:27.580 
"mask": "0x40", 00:04:27.580 "tpoint_mask": "0x0" 00:04:27.580 }, 00:04:27.580 "blobfs": { 00:04:27.580 "mask": "0x80", 00:04:27.580 "tpoint_mask": "0x0" 00:04:27.580 }, 00:04:27.580 "dsa": { 00:04:27.580 "mask": "0x200", 00:04:27.580 "tpoint_mask": "0x0" 00:04:27.580 }, 00:04:27.580 "thread": { 00:04:27.580 "mask": "0x400", 00:04:27.580 "tpoint_mask": "0x0" 00:04:27.580 }, 00:04:27.580 "nvme_pcie": { 00:04:27.580 "mask": "0x800", 00:04:27.580 "tpoint_mask": "0x0" 00:04:27.580 }, 00:04:27.580 "iaa": { 00:04:27.580 "mask": "0x1000", 00:04:27.580 "tpoint_mask": "0x0" 00:04:27.580 }, 00:04:27.580 "nvme_tcp": { 00:04:27.580 "mask": "0x2000", 00:04:27.580 "tpoint_mask": "0x0" 00:04:27.580 }, 00:04:27.580 "bdev_nvme": { 00:04:27.580 "mask": "0x4000", 00:04:27.580 "tpoint_mask": "0x0" 00:04:27.580 }, 00:04:27.580 "sock": { 00:04:27.580 "mask": "0x8000", 00:04:27.580 "tpoint_mask": "0x0" 00:04:27.580 } 00:04:27.580 }' 00:04:27.580 07:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:27.838 07:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:27.838 07:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:27.838 07:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:27.838 07:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:27.838 07:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:27.838 07:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:27.838 07:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:27.838 07:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:27.838 ************************************ 00:04:27.838 END TEST rpc_trace_cmd_test 00:04:27.838 ************************************ 00:04:27.838 07:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:27.838 00:04:27.838 real 0m0.281s 00:04:27.838 user 0m0.241s 00:04:27.838 sys 0m0.032s 00:04:27.838 07:29:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.838 07:29:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:28.097 07:29:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:28.097 07:29:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:28.097 07:29:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:28.097 07:29:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.097 07:29:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.097 07:29:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.097 ************************************ 00:04:28.097 START TEST rpc_daemon_integrity 00:04:28.097 ************************************ 00:04:28.097 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:28.097 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:28.097 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.097 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.097 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.097 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:28.097 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:28.097 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:04:28.097 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:28.097 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.097 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.097 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.097 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:28.097 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:28.097 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.097 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.097 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.097 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:28.097 { 00:04:28.097 "name": "Malloc2", 00:04:28.097 "aliases": [ 00:04:28.097 "8455f7e0-a866-43cf-a273-84afc556eb9c" 00:04:28.097 ], 00:04:28.097 "product_name": "Malloc disk", 00:04:28.097 "block_size": 512, 00:04:28.097 "num_blocks": 16384, 00:04:28.097 "uuid": "8455f7e0-a866-43cf-a273-84afc556eb9c", 00:04:28.098 "assigned_rate_limits": { 00:04:28.098 "rw_ios_per_sec": 0, 00:04:28.098 "rw_mbytes_per_sec": 0, 00:04:28.098 "r_mbytes_per_sec": 0, 00:04:28.098 "w_mbytes_per_sec": 0 00:04:28.098 }, 00:04:28.098 "claimed": false, 00:04:28.098 "zoned": false, 00:04:28.098 "supported_io_types": { 00:04:28.098 "read": true, 00:04:28.098 "write": true, 00:04:28.098 "unmap": true, 00:04:28.098 "flush": true, 00:04:28.098 "reset": true, 00:04:28.098 "nvme_admin": false, 00:04:28.098 "nvme_io": false, 00:04:28.098 "nvme_io_md": false, 00:04:28.098 "write_zeroes": true, 00:04:28.098 "zcopy": true, 00:04:28.098 "get_zone_info": false, 00:04:28.098 "zone_management": false, 00:04:28.098 "zone_append": false, 00:04:28.098 "compare": false, 00:04:28.098 "compare_and_write": false, 00:04:28.098 "abort": true, 00:04:28.098 "seek_hole": false, 00:04:28.098 "seek_data": false, 00:04:28.098 "copy": true, 00:04:28.098 "nvme_iov_md": false 00:04:28.098 }, 00:04:28.098 "memory_domains": [ 00:04:28.098 { 00:04:28.098 "dma_device_id": "system", 00:04:28.098 "dma_device_type": 1 00:04:28.098 }, 00:04:28.098 { 00:04:28.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.098 "dma_device_type": 2 00:04:28.098 } 00:04:28.098 ], 00:04:28.098 "driver_specific": {} 00:04:28.098 } 00:04:28.098 ]' 00:04:28.098 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:28.098 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:28.098 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:28.098 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.098 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.098 [2024-07-26 07:29:53.628596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:28.098 [2024-07-26 07:29:53.628650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:28.098 [2024-07-26 07:29:53.628676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xacdbe0 00:04:28.098 [2024-07-26 07:29:53.628686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:28.098 [2024-07-26 07:29:53.630221] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:28.098 [2024-07-26 07:29:53.630258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:28.098 Passthru0 00:04:28.098 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.098 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:28.098 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.098 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.098 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.098 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:28.098 { 00:04:28.098 "name": "Malloc2", 00:04:28.098 "aliases": [ 00:04:28.098 "8455f7e0-a866-43cf-a273-84afc556eb9c" 00:04:28.098 ], 00:04:28.098 "product_name": "Malloc disk", 00:04:28.098 "block_size": 512, 00:04:28.098 "num_blocks": 16384, 00:04:28.098 "uuid": "8455f7e0-a866-43cf-a273-84afc556eb9c", 00:04:28.098 "assigned_rate_limits": { 00:04:28.098 "rw_ios_per_sec": 0, 00:04:28.098 "rw_mbytes_per_sec": 0, 00:04:28.098 "r_mbytes_per_sec": 0, 00:04:28.098 "w_mbytes_per_sec": 0 00:04:28.098 }, 00:04:28.098 "claimed": true, 00:04:28.098 "claim_type": "exclusive_write", 00:04:28.098 "zoned": false, 00:04:28.098 "supported_io_types": { 00:04:28.098 "read": true, 00:04:28.098 "write": true, 00:04:28.098 "unmap": true, 00:04:28.098 "flush": true, 00:04:28.098 "reset": true, 00:04:28.098 "nvme_admin": false, 00:04:28.098 "nvme_io": false, 00:04:28.098 "nvme_io_md": false, 00:04:28.098 "write_zeroes": true, 00:04:28.098 "zcopy": true, 00:04:28.098 "get_zone_info": false, 00:04:28.098 "zone_management": false, 00:04:28.098 "zone_append": false, 00:04:28.098 "compare": false, 00:04:28.098 "compare_and_write": false, 00:04:28.098 "abort": true, 00:04:28.098 "seek_hole": false, 00:04:28.098 "seek_data": false, 00:04:28.098 "copy": true, 00:04:28.098 "nvme_iov_md": false 00:04:28.098 }, 00:04:28.098 "memory_domains": [ 00:04:28.098 { 00:04:28.098 "dma_device_id": "system", 00:04:28.098 "dma_device_type": 1 00:04:28.098 }, 00:04:28.098 { 00:04:28.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.098 "dma_device_type": 2 00:04:28.098 } 00:04:28.098 ], 00:04:28.098 "driver_specific": {} 00:04:28.098 }, 00:04:28.098 { 00:04:28.098 "name": "Passthru0", 00:04:28.098 "aliases": [ 00:04:28.098 "aef3d54d-b001-570a-b697-9f8cc5025b8a" 00:04:28.098 ], 00:04:28.098 "product_name": "passthru", 00:04:28.098 "block_size": 512, 00:04:28.098 "num_blocks": 16384, 00:04:28.098 "uuid": "aef3d54d-b001-570a-b697-9f8cc5025b8a", 00:04:28.098 "assigned_rate_limits": { 00:04:28.098 "rw_ios_per_sec": 0, 00:04:28.098 "rw_mbytes_per_sec": 0, 00:04:28.098 "r_mbytes_per_sec": 0, 00:04:28.098 "w_mbytes_per_sec": 0 00:04:28.098 }, 00:04:28.098 "claimed": false, 00:04:28.098 "zoned": false, 00:04:28.098 "supported_io_types": { 00:04:28.098 "read": true, 00:04:28.098 "write": true, 00:04:28.098 "unmap": true, 00:04:28.098 "flush": true, 00:04:28.098 "reset": true, 00:04:28.098 "nvme_admin": false, 00:04:28.098 "nvme_io": false, 00:04:28.098 "nvme_io_md": false, 00:04:28.098 "write_zeroes": true, 00:04:28.098 "zcopy": true, 00:04:28.098 "get_zone_info": false, 00:04:28.098 "zone_management": false, 00:04:28.098 "zone_append": false, 00:04:28.098 "compare": false, 00:04:28.098 "compare_and_write": false, 00:04:28.098 "abort": true, 00:04:28.098 "seek_hole": false, 
00:04:28.098 "seek_data": false, 00:04:28.098 "copy": true, 00:04:28.098 "nvme_iov_md": false 00:04:28.098 }, 00:04:28.098 "memory_domains": [ 00:04:28.098 { 00:04:28.098 "dma_device_id": "system", 00:04:28.098 "dma_device_type": 1 00:04:28.098 }, 00:04:28.098 { 00:04:28.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.098 "dma_device_type": 2 00:04:28.098 } 00:04:28.098 ], 00:04:28.098 "driver_specific": { 00:04:28.098 "passthru": { 00:04:28.098 "name": "Passthru0", 00:04:28.098 "base_bdev_name": "Malloc2" 00:04:28.098 } 00:04:28.098 } 00:04:28.098 } 00:04:28.098 ]' 00:04:28.098 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:28.357 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:28.357 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:28.357 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.357 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.357 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.357 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:28.357 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.357 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.357 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.357 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:28.357 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.357 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.357 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.357 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:28.357 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:28.357 ************************************ 00:04:28.357 END TEST rpc_daemon_integrity 00:04:28.357 ************************************ 00:04:28.357 07:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:28.357 00:04:28.357 real 0m0.329s 00:04:28.357 user 0m0.223s 00:04:28.357 sys 0m0.035s 00:04:28.357 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.357 07:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.357 07:29:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:28.357 07:29:53 rpc -- rpc/rpc.sh@84 -- # killprocess 58879 00:04:28.357 07:29:53 rpc -- common/autotest_common.sh@950 -- # '[' -z 58879 ']' 00:04:28.357 07:29:53 rpc -- common/autotest_common.sh@954 -- # kill -0 58879 00:04:28.357 07:29:53 rpc -- common/autotest_common.sh@955 -- # uname 00:04:28.357 07:29:53 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:28.357 07:29:53 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58879 00:04:28.357 killing process with pid 58879 00:04:28.357 07:29:53 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:28.357 07:29:53 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:28.357 07:29:53 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58879' 00:04:28.357 07:29:53 rpc -- common/autotest_common.sh@969 -- # kill 58879 00:04:28.357 07:29:53 
rpc -- common/autotest_common.sh@974 -- # wait 58879 00:04:28.923 00:04:28.923 real 0m3.008s 00:04:28.923 user 0m3.747s 00:04:28.923 sys 0m0.786s 00:04:28.923 ************************************ 00:04:28.923 END TEST rpc 00:04:28.923 ************************************ 00:04:28.923 07:29:54 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.923 07:29:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.924 07:29:54 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:28.924 07:29:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.924 07:29:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.924 07:29:54 -- common/autotest_common.sh@10 -- # set +x 00:04:28.924 ************************************ 00:04:28.924 START TEST skip_rpc 00:04:28.924 ************************************ 00:04:28.924 07:29:54 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:29.181 * Looking for test storage... 00:04:29.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:29.181 07:29:54 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:29.181 07:29:54 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:29.181 07:29:54 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:29.181 07:29:54 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.181 07:29:54 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.181 07:29:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.181 ************************************ 00:04:29.181 START TEST skip_rpc 00:04:29.181 ************************************ 00:04:29.182 07:29:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:29.182 07:29:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59077 00:04:29.182 07:29:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.182 07:29:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:29.182 07:29:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:29.182 [2024-07-26 07:29:54.641746] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
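The skip_rpc case that begins here is a negative test: spdk_tgt (pid 59077) is started with --no-rpc-server, the script sleeps 5 seconds, and the NOT wrapper then asserts that 'rpc_cmd spdk_get_version' fails because no RPC listener exists; the es=1 bookkeeping further down records that expected failure before the target is killed. In sketch form, with the NOT helper replaced by a plain exit-status check (run from the repo root, hugepages assumed):

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    if scripts/rpc.py spdk_get_version; then echo 'unexpected success'; exit 1; fi
    kill $!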
00:04:29.182 [2024-07-26 07:29:54.642265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59077 ] 00:04:29.182 [2024-07-26 07:29:54.778502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.439 [2024-07-26 07:29:54.915690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.439 [2024-07-26 07:29:54.992859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59077 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 59077 ']' 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 59077 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59077 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59077' 00:04:34.733 killing process with pid 59077 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 59077 00:04:34.733 07:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 59077 00:04:34.733 ************************************ 00:04:34.733 END TEST skip_rpc 00:04:34.733 ************************************ 00:04:34.733 00:04:34.733 real 0m5.632s 00:04:34.733 user 0m5.153s 00:04:34.733 sys 0m0.380s 00:04:34.733 07:30:00 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.733 07:30:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.733 07:30:00 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:34.733 07:30:00 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.733 07:30:00 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.733 07:30:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.733 ************************************ 00:04:34.733 START TEST skip_rpc_with_json 00:04:34.733 ************************************ 00:04:34.733 07:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:34.733 07:30:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:34.733 07:30:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59169 00:04:34.733 07:30:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.733 07:30:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.733 07:30:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59169 00:04:34.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.733 07:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 59169 ']' 00:04:34.733 07:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.733 07:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:34.733 07:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.733 07:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:34.733 07:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.733 [2024-07-26 07:30:00.326756] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
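skip_rpc_with_json (target pid 59169) checks that a live configuration survives a JSON round trip: a TCP transport is created over RPC, the running config is saved with save_config into test/rpc/config.json (the CONFIG_PATH set at the top of skip_rpc.sh), and a fresh spdk_tgt is then started with --json pointing at that file, after which 'TCP Transport Init' must show up in test/rpc/log.txt. An outline in shell, with the output redirection assumed because it is not visible in this excerpt:

    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > test/rpc/config.json
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' test/rpc/log.txt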
00:04:34.733 [2024-07-26 07:30:00.327060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59169 ] 00:04:34.992 [2024-07-26 07:30:00.466270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.992 [2024-07-26 07:30:00.578998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.250 [2024-07-26 07:30:00.658207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:35.815 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.815 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:35.815 07:30:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:35.815 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.815 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.815 [2024-07-26 07:30:01.315432] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:35.815 request: 00:04:35.815 { 00:04:35.815 "trtype": "tcp", 00:04:35.815 "method": "nvmf_get_transports", 00:04:35.815 "req_id": 1 00:04:35.815 } 00:04:35.815 Got JSON-RPC error response 00:04:35.815 response: 00:04:35.815 { 00:04:35.815 "code": -19, 00:04:35.815 "message": "No such device" 00:04:35.815 } 00:04:35.815 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:35.815 07:30:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:35.815 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.815 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.815 [2024-07-26 07:30:01.327588] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:35.816 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.816 07:30:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:35.816 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.816 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.074 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.074 07:30:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:36.074 { 00:04:36.074 "subsystems": [ 00:04:36.074 { 00:04:36.074 "subsystem": "keyring", 00:04:36.074 "config": [] 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "subsystem": "iobuf", 00:04:36.074 "config": [ 00:04:36.074 { 00:04:36.074 "method": "iobuf_set_options", 00:04:36.074 "params": { 00:04:36.074 "small_pool_count": 8192, 00:04:36.074 "large_pool_count": 1024, 00:04:36.074 "small_bufsize": 8192, 00:04:36.074 "large_bufsize": 135168 00:04:36.074 } 00:04:36.074 } 00:04:36.074 ] 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "subsystem": "sock", 00:04:36.074 "config": [ 00:04:36.074 { 00:04:36.074 "method": "sock_set_default_impl", 00:04:36.074 "params": { 00:04:36.074 "impl_name": "uring" 00:04:36.074 } 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "method": "sock_impl_set_options", 
00:04:36.074 "params": { 00:04:36.074 "impl_name": "ssl", 00:04:36.074 "recv_buf_size": 4096, 00:04:36.074 "send_buf_size": 4096, 00:04:36.074 "enable_recv_pipe": true, 00:04:36.074 "enable_quickack": false, 00:04:36.074 "enable_placement_id": 0, 00:04:36.074 "enable_zerocopy_send_server": true, 00:04:36.074 "enable_zerocopy_send_client": false, 00:04:36.074 "zerocopy_threshold": 0, 00:04:36.074 "tls_version": 0, 00:04:36.074 "enable_ktls": false 00:04:36.074 } 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "method": "sock_impl_set_options", 00:04:36.074 "params": { 00:04:36.074 "impl_name": "posix", 00:04:36.074 "recv_buf_size": 2097152, 00:04:36.074 "send_buf_size": 2097152, 00:04:36.074 "enable_recv_pipe": true, 00:04:36.074 "enable_quickack": false, 00:04:36.074 "enable_placement_id": 0, 00:04:36.074 "enable_zerocopy_send_server": true, 00:04:36.074 "enable_zerocopy_send_client": false, 00:04:36.074 "zerocopy_threshold": 0, 00:04:36.074 "tls_version": 0, 00:04:36.074 "enable_ktls": false 00:04:36.074 } 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "method": "sock_impl_set_options", 00:04:36.074 "params": { 00:04:36.074 "impl_name": "uring", 00:04:36.074 "recv_buf_size": 2097152, 00:04:36.074 "send_buf_size": 2097152, 00:04:36.074 "enable_recv_pipe": true, 00:04:36.074 "enable_quickack": false, 00:04:36.074 "enable_placement_id": 0, 00:04:36.074 "enable_zerocopy_send_server": false, 00:04:36.074 "enable_zerocopy_send_client": false, 00:04:36.074 "zerocopy_threshold": 0, 00:04:36.074 "tls_version": 0, 00:04:36.074 "enable_ktls": false 00:04:36.074 } 00:04:36.074 } 00:04:36.074 ] 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "subsystem": "vmd", 00:04:36.074 "config": [] 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "subsystem": "accel", 00:04:36.074 "config": [ 00:04:36.074 { 00:04:36.074 "method": "accel_set_options", 00:04:36.074 "params": { 00:04:36.074 "small_cache_size": 128, 00:04:36.074 "large_cache_size": 16, 00:04:36.074 "task_count": 2048, 00:04:36.074 "sequence_count": 2048, 00:04:36.074 "buf_count": 2048 00:04:36.074 } 00:04:36.074 } 00:04:36.074 ] 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "subsystem": "bdev", 00:04:36.074 "config": [ 00:04:36.074 { 00:04:36.074 "method": "bdev_set_options", 00:04:36.074 "params": { 00:04:36.074 "bdev_io_pool_size": 65535, 00:04:36.074 "bdev_io_cache_size": 256, 00:04:36.074 "bdev_auto_examine": true, 00:04:36.074 "iobuf_small_cache_size": 128, 00:04:36.074 "iobuf_large_cache_size": 16 00:04:36.074 } 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "method": "bdev_raid_set_options", 00:04:36.074 "params": { 00:04:36.074 "process_window_size_kb": 1024, 00:04:36.074 "process_max_bandwidth_mb_sec": 0 00:04:36.074 } 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "method": "bdev_iscsi_set_options", 00:04:36.074 "params": { 00:04:36.074 "timeout_sec": 30 00:04:36.074 } 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "method": "bdev_nvme_set_options", 00:04:36.074 "params": { 00:04:36.074 "action_on_timeout": "none", 00:04:36.074 "timeout_us": 0, 00:04:36.074 "timeout_admin_us": 0, 00:04:36.074 "keep_alive_timeout_ms": 10000, 00:04:36.074 "arbitration_burst": 0, 00:04:36.074 "low_priority_weight": 0, 00:04:36.074 "medium_priority_weight": 0, 00:04:36.074 "high_priority_weight": 0, 00:04:36.074 "nvme_adminq_poll_period_us": 10000, 00:04:36.074 "nvme_ioq_poll_period_us": 0, 00:04:36.074 "io_queue_requests": 0, 00:04:36.074 "delay_cmd_submit": true, 00:04:36.074 "transport_retry_count": 4, 00:04:36.074 "bdev_retry_count": 3, 00:04:36.074 "transport_ack_timeout": 0, 
00:04:36.074 "ctrlr_loss_timeout_sec": 0, 00:04:36.074 "reconnect_delay_sec": 0, 00:04:36.074 "fast_io_fail_timeout_sec": 0, 00:04:36.074 "disable_auto_failback": false, 00:04:36.074 "generate_uuids": false, 00:04:36.074 "transport_tos": 0, 00:04:36.074 "nvme_error_stat": false, 00:04:36.074 "rdma_srq_size": 0, 00:04:36.074 "io_path_stat": false, 00:04:36.074 "allow_accel_sequence": false, 00:04:36.074 "rdma_max_cq_size": 0, 00:04:36.074 "rdma_cm_event_timeout_ms": 0, 00:04:36.074 "dhchap_digests": [ 00:04:36.074 "sha256", 00:04:36.074 "sha384", 00:04:36.074 "sha512" 00:04:36.074 ], 00:04:36.074 "dhchap_dhgroups": [ 00:04:36.074 "null", 00:04:36.074 "ffdhe2048", 00:04:36.074 "ffdhe3072", 00:04:36.074 "ffdhe4096", 00:04:36.074 "ffdhe6144", 00:04:36.074 "ffdhe8192" 00:04:36.074 ] 00:04:36.074 } 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "method": "bdev_nvme_set_hotplug", 00:04:36.074 "params": { 00:04:36.074 "period_us": 100000, 00:04:36.074 "enable": false 00:04:36.074 } 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "method": "bdev_wait_for_examine" 00:04:36.074 } 00:04:36.074 ] 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "subsystem": "scsi", 00:04:36.074 "config": null 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "subsystem": "scheduler", 00:04:36.074 "config": [ 00:04:36.074 { 00:04:36.074 "method": "framework_set_scheduler", 00:04:36.074 "params": { 00:04:36.074 "name": "static" 00:04:36.074 } 00:04:36.074 } 00:04:36.074 ] 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "subsystem": "vhost_scsi", 00:04:36.074 "config": [] 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "subsystem": "vhost_blk", 00:04:36.074 "config": [] 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "subsystem": "ublk", 00:04:36.074 "config": [] 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "subsystem": "nbd", 00:04:36.074 "config": [] 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "subsystem": "nvmf", 00:04:36.074 "config": [ 00:04:36.074 { 00:04:36.074 "method": "nvmf_set_config", 00:04:36.074 "params": { 00:04:36.074 "discovery_filter": "match_any", 00:04:36.074 "admin_cmd_passthru": { 00:04:36.074 "identify_ctrlr": false 00:04:36.074 } 00:04:36.074 } 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "method": "nvmf_set_max_subsystems", 00:04:36.074 "params": { 00:04:36.074 "max_subsystems": 1024 00:04:36.074 } 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "method": "nvmf_set_crdt", 00:04:36.074 "params": { 00:04:36.074 "crdt1": 0, 00:04:36.074 "crdt2": 0, 00:04:36.074 "crdt3": 0 00:04:36.074 } 00:04:36.074 }, 00:04:36.074 { 00:04:36.074 "method": "nvmf_create_transport", 00:04:36.074 "params": { 00:04:36.074 "trtype": "TCP", 00:04:36.074 "max_queue_depth": 128, 00:04:36.074 "max_io_qpairs_per_ctrlr": 127, 00:04:36.074 "in_capsule_data_size": 4096, 00:04:36.074 "max_io_size": 131072, 00:04:36.074 "io_unit_size": 131072, 00:04:36.074 "max_aq_depth": 128, 00:04:36.074 "num_shared_buffers": 511, 00:04:36.074 "buf_cache_size": 4294967295, 00:04:36.074 "dif_insert_or_strip": false, 00:04:36.074 "zcopy": false, 00:04:36.074 "c2h_success": true, 00:04:36.074 "sock_priority": 0, 00:04:36.074 "abort_timeout_sec": 1, 00:04:36.074 "ack_timeout": 0, 00:04:36.075 "data_wr_pool_size": 0 00:04:36.075 } 00:04:36.075 } 00:04:36.075 ] 00:04:36.075 }, 00:04:36.075 { 00:04:36.075 "subsystem": "iscsi", 00:04:36.075 "config": [ 00:04:36.075 { 00:04:36.075 "method": "iscsi_set_options", 00:04:36.075 "params": { 00:04:36.075 "node_base": "iqn.2016-06.io.spdk", 00:04:36.075 "max_sessions": 128, 00:04:36.075 "max_connections_per_session": 2, 00:04:36.075 
"max_queue_depth": 64, 00:04:36.075 "default_time2wait": 2, 00:04:36.075 "default_time2retain": 20, 00:04:36.075 "first_burst_length": 8192, 00:04:36.075 "immediate_data": true, 00:04:36.075 "allow_duplicated_isid": false, 00:04:36.075 "error_recovery_level": 0, 00:04:36.075 "nop_timeout": 60, 00:04:36.075 "nop_in_interval": 30, 00:04:36.075 "disable_chap": false, 00:04:36.075 "require_chap": false, 00:04:36.075 "mutual_chap": false, 00:04:36.075 "chap_group": 0, 00:04:36.075 "max_large_datain_per_connection": 64, 00:04:36.075 "max_r2t_per_connection": 4, 00:04:36.075 "pdu_pool_size": 36864, 00:04:36.075 "immediate_data_pool_size": 16384, 00:04:36.075 "data_out_pool_size": 2048 00:04:36.075 } 00:04:36.075 } 00:04:36.075 ] 00:04:36.075 } 00:04:36.075 ] 00:04:36.075 } 00:04:36.075 07:30:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:36.075 07:30:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59169 00:04:36.075 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59169 ']' 00:04:36.075 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59169 00:04:36.075 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:36.075 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:36.075 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59169 00:04:36.075 killing process with pid 59169 00:04:36.075 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:36.075 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:36.075 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59169' 00:04:36.075 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 59169 00:04:36.075 07:30:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59169 00:04:36.641 07:30:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59196 00:04:36.641 07:30:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:36.641 07:30:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:41.906 07:30:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59196 00:04:41.906 07:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59196 ']' 00:04:41.906 07:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59196 00:04:41.906 07:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:41.906 07:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:41.906 07:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59196 00:04:41.906 killing process with pid 59196 00:04:41.906 07:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:41.906 07:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:41.906 07:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59196' 00:04:41.906 07:30:07 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@969 -- # kill 59196 00:04:41.906 07:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59196 00:04:42.164 07:30:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:42.164 07:30:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:42.164 ************************************ 00:04:42.164 END TEST skip_rpc_with_json 00:04:42.164 ************************************ 00:04:42.164 00:04:42.164 real 0m7.439s 00:04:42.164 user 0m6.998s 00:04:42.164 sys 0m0.835s 00:04:42.164 07:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.164 07:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.164 07:30:07 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:42.164 07:30:07 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.164 07:30:07 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.164 07:30:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.164 ************************************ 00:04:42.164 START TEST skip_rpc_with_delay 00:04:42.164 ************************************ 00:04:42.164 07:30:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:42.164 07:30:07 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.164 07:30:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:42.164 07:30:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.164 07:30:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.164 07:30:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.164 07:30:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.164 07:30:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.164 07:30:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.164 07:30:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.165 07:30:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.165 07:30:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:42.165 07:30:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.423 [2024-07-26 07:30:07.820761] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
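For reference, the '--wait-for-rpc' failure above is the expected outcome: the NOT/valid_exec_arg wrappers only invert the exit status, so the test passes when spdk_tgt refuses that flag combination. A minimal sketch of the inversion (not the actual helper body from autotest_common.sh; paths are relative to the SPDK repo):

    NOT() {                                  # sketch: succeed only when the wrapped command fails
        if "$@"; then return 1; else return 0; fi
    }
    NOT build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # spdk_tgt exits non-zero after "Cannot use '--wait-for-rpc' if no RPC server is going to be started."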
00:04:42.423 [2024-07-26 07:30:07.820919] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:42.423 07:30:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:42.423 07:30:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:42.423 07:30:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:42.423 07:30:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:42.423 00:04:42.423 real 0m0.090s 00:04:42.423 user 0m0.052s 00:04:42.423 sys 0m0.036s 00:04:42.423 07:30:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.423 07:30:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:42.423 ************************************ 00:04:42.423 END TEST skip_rpc_with_delay 00:04:42.423 ************************************ 00:04:42.423 07:30:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:42.423 07:30:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:42.423 07:30:07 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:42.423 07:30:07 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.423 07:30:07 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.423 07:30:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.423 ************************************ 00:04:42.423 START TEST exit_on_failed_rpc_init 00:04:42.423 ************************************ 00:04:42.423 07:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:42.423 07:30:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59306 00:04:42.423 07:30:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.423 07:30:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59306 00:04:42.423 07:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 59306 ']' 00:04:42.423 07:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.423 07:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:42.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.423 07:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.423 07:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:42.423 07:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.423 [2024-07-26 07:30:07.964286] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:04:42.423 [2024-07-26 07:30:07.964395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59306 ] 00:04:42.682 [2024-07-26 07:30:08.098869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.682 [2024-07-26 07:30:08.227254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.940 [2024-07-26 07:30:08.304549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:43.507 07:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:43.507 07:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:43.507 07:30:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.507 07:30:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.507 07:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:43.507 07:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.507 07:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.507 07:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.507 07:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.507 07:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.507 07:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.507 07:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.507 07:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.507 07:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:43.507 07:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.507 [2024-07-26 07:30:08.978811] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:04:43.507 [2024-07-26 07:30:08.979093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59324 ] 00:04:43.765 [2024-07-26 07:30:09.120900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.765 [2024-07-26 07:30:09.231056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.765 [2024-07-26 07:30:09.231190] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
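The "socket path in use" error above is exactly what exit_on_failed_rpc_init is checking for: both spdk_tgt instances default to /var/tmp/spdk.sock, so the second one cannot bind it and spdk_app_start bails out. A rough equivalent of the trace, with rpc_get_methods standing in for the waitforlisten helper (an assumption, not the helper's real implementation):

    build/bin/spdk_tgt -m 0x1 &                  # first instance owns /var/tmp/spdk.sock
    first=$!
    until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    if build/bin/spdk_tgt -m 0x2; then           # second instance fails: "RPC Unix domain socket path ... in use"
        echo "second instance unexpectedly started" >&2
    fi
    kill -SIGINT "$first"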
00:04:43.765 [2024-07-26 07:30:09.231208] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:43.765 [2024-07-26 07:30:09.231220] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:43.765 07:30:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:43.765 07:30:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:43.765 07:30:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:43.765 07:30:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:43.765 07:30:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:43.765 07:30:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:43.765 07:30:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:43.765 07:30:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59306 00:04:43.765 07:30:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 59306 ']' 00:04:43.765 07:30:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 59306 00:04:43.765 07:30:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:43.765 07:30:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:43.765 07:30:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59306 00:04:44.024 killing process with pid 59306 00:04:44.024 07:30:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:44.024 07:30:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:44.024 07:30:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59306' 00:04:44.024 07:30:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 59306 00:04:44.024 07:30:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 59306 00:04:44.616 00:04:44.616 real 0m2.042s 00:04:44.616 user 0m2.250s 00:04:44.616 sys 0m0.527s 00:04:44.616 ************************************ 00:04:44.616 END TEST exit_on_failed_rpc_init 00:04:44.616 ************************************ 00:04:44.616 07:30:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.616 07:30:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:44.616 07:30:09 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:44.616 ************************************ 00:04:44.616 END TEST skip_rpc 00:04:44.616 ************************************ 00:04:44.616 00:04:44.616 real 0m15.502s 00:04:44.616 user 0m14.558s 00:04:44.616 sys 0m1.962s 00:04:44.616 07:30:09 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.616 07:30:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.616 07:30:10 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:44.616 07:30:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.616 07:30:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.616 07:30:10 -- common/autotest_common.sh@10 -- # set +x 00:04:44.616 
************************************ 00:04:44.616 START TEST rpc_client 00:04:44.616 ************************************ 00:04:44.616 07:30:10 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:44.616 * Looking for test storage... 00:04:44.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:44.616 07:30:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:44.616 OK 00:04:44.616 07:30:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:44.616 00:04:44.616 real 0m0.106s 00:04:44.616 user 0m0.053s 00:04:44.616 sys 0m0.057s 00:04:44.616 07:30:10 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.616 07:30:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:44.616 ************************************ 00:04:44.616 END TEST rpc_client 00:04:44.616 ************************************ 00:04:44.616 07:30:10 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:44.616 07:30:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.616 07:30:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.616 07:30:10 -- common/autotest_common.sh@10 -- # set +x 00:04:44.616 ************************************ 00:04:44.616 START TEST json_config 00:04:44.617 ************************************ 00:04:44.617 07:30:10 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:44.876 07:30:10 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.876 07:30:10 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.876 07:30:10 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.876 07:30:10 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.876 07:30:10 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.876 07:30:10 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.876 07:30:10 json_config -- paths/export.sh@5 -- # export PATH 00:04:44.876 07:30:10 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@47 -- # : 0 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:44.876 07:30:10 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:44.876 07:30:10 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:44.876 INFO: JSON configuration test init 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:44.876 07:30:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:44.876 07:30:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.876 07:30:10 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:44.876 07:30:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:44.876 07:30:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.877 07:30:10 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:44.877 Waiting for target to run... 00:04:44.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:44.877 07:30:10 json_config -- json_config/common.sh@9 -- # local app=target 00:04:44.877 07:30:10 json_config -- json_config/common.sh@10 -- # shift 00:04:44.877 07:30:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:44.877 07:30:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:44.877 07:30:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:44.877 07:30:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.877 07:30:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.877 07:30:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59443 00:04:44.877 07:30:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
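The common.sh bookkeeping above is a handful of associative arrays keyed by app name; for the 'target' app they expand to a single spdk_tgt invocation on its own RPC socket. A condensed sketch built from the values printed in the trace (not the literal json_config_test_start_app body):

    declare -A app_pid app_socket app_params
    app_socket[target]=/var/tmp/spdk_tgt.sock
    app_params[target]='-m 0x1 -s 1024'
    # app_params left unquoted on purpose so '-m 0x1 -s 1024' splits back into separate flags
    build/bin/spdk_tgt ${app_params[target]} -r "${app_socket[target]}" --wait-for-rpc &
    app_pid[target]=$!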
00:04:44.877 07:30:10 json_config -- json_config/common.sh@25 -- # waitforlisten 59443 /var/tmp/spdk_tgt.sock 00:04:44.877 07:30:10 json_config -- common/autotest_common.sh@831 -- # '[' -z 59443 ']' 00:04:44.877 07:30:10 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:44.877 07:30:10 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:44.877 07:30:10 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:44.877 07:30:10 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:44.877 07:30:10 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:44.877 07:30:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.877 [2024-07-26 07:30:10.345714] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:04:44.877 [2024-07-26 07:30:10.345796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59443 ] 00:04:45.442 [2024-07-26 07:30:10.874155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.442 [2024-07-26 07:30:10.976855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.007 00:04:46.007 07:30:11 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.007 07:30:11 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:46.007 07:30:11 json_config -- json_config/common.sh@26 -- # echo '' 00:04:46.008 07:30:11 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:46.008 07:30:11 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:46.008 07:30:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:46.008 07:30:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.008 07:30:11 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:46.008 07:30:11 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:46.008 07:30:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:46.008 07:30:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.008 07:30:11 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:46.008 07:30:11 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:46.008 07:30:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:46.008 [2024-07-26 07:30:11.601054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:46.265 07:30:11 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:04:46.265 07:30:11 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:46.265 07:30:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:46.265 07:30:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.265 07:30:11 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:46.265 07:30:11 json_config -- 
json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:46.265 07:30:11 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:46.265 07:30:11 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:46.265 07:30:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:46.265 07:30:11 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@51 -- # sort 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:04:46.523 07:30:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:46.523 07:30:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:04:46.523 07:30:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:46.523 07:30:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:04:46.523 07:30:12 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:46.523 07:30:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:46.780 MallocForNvmf0 00:04:46.780 07:30:12 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:46.780 07:30:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:47.038 MallocForNvmf1 00:04:47.038 07:30:12 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:47.038 
07:30:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:47.296 [2024-07-26 07:30:12.833600] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.296 07:30:12 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:47.296 07:30:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:47.554 07:30:13 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:47.554 07:30:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:47.811 07:30:13 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:47.811 07:30:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:48.069 07:30:13 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:48.069 07:30:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:48.326 [2024-07-26 07:30:13.714026] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:48.326 07:30:13 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:04:48.326 07:30:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:48.326 07:30:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.326 07:30:13 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:48.326 07:30:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:48.326 07:30:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.326 07:30:13 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:48.326 07:30:13 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:48.326 07:30:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:48.598 MallocBdevForConfigChangeCheck 00:04:48.598 07:30:14 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:48.598 07:30:14 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:48.598 07:30:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.598 07:30:14 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:48.598 07:30:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:49.163 INFO: shutting down applications... 
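Stripped of the tgt_rpc wrapper, the configuration assembled above is the rpc.py sequence below, run against the target's socket. The arguments are taken verbatim from the trace; only the small rpc alias is added here for readability, and redirecting save_config into spdk_tgt_config.json is an assumption about where the test keeps the snapshot.

    rpc() { scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }   # convenience alias, not part of the test
    rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    rpc nvmf_create_transport -t tcp -u 8192 -c 0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
    rpc save_config > spdk_tgt_config.json   # snapshot path assumed; the trace only shows save_config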
00:04:49.163 07:30:14 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:04:49.163 07:30:14 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:49.163 07:30:14 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:49.163 07:30:14 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:49.163 07:30:14 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:49.421 Calling clear_iscsi_subsystem 00:04:49.421 Calling clear_nvmf_subsystem 00:04:49.421 Calling clear_nbd_subsystem 00:04:49.421 Calling clear_ublk_subsystem 00:04:49.421 Calling clear_vhost_blk_subsystem 00:04:49.421 Calling clear_vhost_scsi_subsystem 00:04:49.421 Calling clear_bdev_subsystem 00:04:49.421 07:30:14 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:49.421 07:30:14 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:49.421 07:30:14 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:49.421 07:30:14 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:49.421 07:30:14 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:49.421 07:30:14 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:49.679 07:30:15 json_config -- json_config/json_config.sh@349 -- # break 00:04:49.679 07:30:15 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:49.679 07:30:15 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:49.679 07:30:15 json_config -- json_config/common.sh@31 -- # local app=target 00:04:49.679 07:30:15 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:49.679 07:30:15 json_config -- json_config/common.sh@35 -- # [[ -n 59443 ]] 00:04:49.679 07:30:15 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59443 00:04:49.680 07:30:15 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:49.680 07:30:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.680 07:30:15 json_config -- json_config/common.sh@41 -- # kill -0 59443 00:04:49.680 07:30:15 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.246 07:30:15 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.246 07:30:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.246 07:30:15 json_config -- json_config/common.sh@41 -- # kill -0 59443 00:04:50.246 SPDK target shutdown done 00:04:50.246 INFO: relaunching applications... 00:04:50.246 07:30:15 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:50.246 07:30:15 json_config -- json_config/common.sh@43 -- # break 00:04:50.246 07:30:15 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:50.246 07:30:15 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:50.246 07:30:15 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
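The shutdown above is a SIGINT followed by a bounded liveness poll; the 30-iteration and 0.5-second figures come straight from the '(( i < 30 ))' and 'sleep 0.5' lines in the trace. A compact equivalent:

    tgt_pid=${app_pid[target]}                 # pid recorded when the target was launched
    kill -SIGINT "$tgt_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$tgt_pid" 2>/dev/null || break    # process gone -> "SPDK target shutdown done"
        sleep 0.5
    done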
00:04:50.246 07:30:15 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:50.246 07:30:15 json_config -- json_config/common.sh@9 -- # local app=target 00:04:50.246 07:30:15 json_config -- json_config/common.sh@10 -- # shift 00:04:50.246 07:30:15 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:50.246 07:30:15 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:50.246 07:30:15 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:50.246 07:30:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.246 07:30:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.246 07:30:15 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59643 00:04:50.246 07:30:15 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:50.246 07:30:15 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:50.246 Waiting for target to run... 00:04:50.246 07:30:15 json_config -- json_config/common.sh@25 -- # waitforlisten 59643 /var/tmp/spdk_tgt.sock 00:04:50.246 07:30:15 json_config -- common/autotest_common.sh@831 -- # '[' -z 59643 ']' 00:04:50.246 07:30:15 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.246 07:30:15 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:50.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:50.246 07:30:15 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.246 07:30:15 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:50.246 07:30:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.246 [2024-07-26 07:30:15.743690] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:04:50.246 [2024-07-26 07:30:15.743795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59643 ] 00:04:50.813 [2024-07-26 07:30:16.258540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.813 [2024-07-26 07:30:16.363452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.071 [2024-07-26 07:30:16.489504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:51.330 [2024-07-26 07:30:16.704278] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:51.330 [2024-07-26 07:30:16.736352] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:51.330 00:04:51.330 INFO: Checking if target configuration is the same... 
00:04:51.330 07:30:16 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.330 07:30:16 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:51.330 07:30:16 json_config -- json_config/common.sh@26 -- # echo '' 00:04:51.330 07:30:16 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:51.330 07:30:16 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:51.330 07:30:16 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:51.330 07:30:16 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:51.330 07:30:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:51.330 + '[' 2 -ne 2 ']' 00:04:51.330 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:51.330 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:51.330 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:51.330 +++ basename /dev/fd/62 00:04:51.330 ++ mktemp /tmp/62.XXX 00:04:51.330 + tmp_file_1=/tmp/62.gsM 00:04:51.330 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:51.330 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:51.330 + tmp_file_2=/tmp/spdk_tgt_config.json.YFc 00:04:51.330 + ret=0 00:04:51.330 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:51.588 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:51.847 + diff -u /tmp/62.gsM /tmp/spdk_tgt_config.json.YFc 00:04:51.847 INFO: JSON config files are the same 00:04:51.847 + echo 'INFO: JSON config files are the same' 00:04:51.847 + rm /tmp/62.gsM /tmp/spdk_tgt_config.json.YFc 00:04:51.847 + exit 0 00:04:51.847 INFO: changing configuration and checking if this can be detected... 00:04:51.847 07:30:17 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:51.847 07:30:17 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:51.847 07:30:17 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:51.847 07:30:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:52.106 07:30:17 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:52.106 07:30:17 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:52.106 07:30:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:52.106 + '[' 2 -ne 2 ']' 00:04:52.106 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:52.106 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:52.106 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:52.106 +++ basename /dev/fd/62 00:04:52.106 ++ mktemp /tmp/62.XXX 00:04:52.106 + tmp_file_1=/tmp/62.nEG 00:04:52.106 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:52.106 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:52.106 + tmp_file_2=/tmp/spdk_tgt_config.json.9Lo 00:04:52.106 + ret=0 00:04:52.106 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:52.364 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:52.364 + diff -u /tmp/62.nEG /tmp/spdk_tgt_config.json.9Lo 00:04:52.364 + ret=1 00:04:52.364 + echo '=== Start of file: /tmp/62.nEG ===' 00:04:52.364 + cat /tmp/62.nEG 00:04:52.364 + echo '=== End of file: /tmp/62.nEG ===' 00:04:52.364 + echo '' 00:04:52.364 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9Lo ===' 00:04:52.364 + cat /tmp/spdk_tgt_config.json.9Lo 00:04:52.364 + echo '=== End of file: /tmp/spdk_tgt_config.json.9Lo ===' 00:04:52.364 + echo '' 00:04:52.364 + rm /tmp/62.nEG /tmp/spdk_tgt_config.json.9Lo 00:04:52.364 + exit 1 00:04:52.364 INFO: configuration change detected. 00:04:52.364 07:30:17 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:52.364 07:30:17 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:52.364 07:30:17 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:52.364 07:30:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.364 07:30:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.364 07:30:17 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:52.364 07:30:17 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:52.364 07:30:17 json_config -- json_config/json_config.sh@321 -- # [[ -n 59643 ]] 00:04:52.364 07:30:17 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:52.364 07:30:17 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:52.364 07:30:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.364 07:30:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.364 07:30:17 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:52.365 07:30:17 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:52.365 07:30:17 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:52.365 07:30:17 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:52.365 07:30:17 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:52.365 07:30:17 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:52.365 07:30:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:52.365 07:30:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.623 07:30:17 json_config -- json_config/json_config.sh@327 -- # killprocess 59643 00:04:52.623 07:30:17 json_config -- common/autotest_common.sh@950 -- # '[' -z 59643 ']' 00:04:52.623 07:30:17 json_config -- common/autotest_common.sh@954 -- # kill -0 59643 00:04:52.623 07:30:17 json_config -- common/autotest_common.sh@955 -- # uname 00:04:52.623 07:30:17 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.623 07:30:17 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59643 00:04:52.623 
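json_diff.sh, as traced above, boils down to normalizing both JSON documents with config_filter.py -method sort and running diff -u on the results: an empty diff is the "JSON config files are the same" case, and deleting MallocBdevForConfigChangeCheck is what flips the second comparison to ret=1. A rough equivalent (the stdin/stdout plumbing and fixed temp-file names are assumptions; the real run uses mktemp):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved_sorted.json
    diff -u /tmp/saved_sorted.json /tmp/live_sorted.json && echo 'INFO: JSON config files are the same'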
killing process with pid 59643 00:04:52.623 07:30:18 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:52.623 07:30:18 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:52.623 07:30:18 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59643' 00:04:52.623 07:30:18 json_config -- common/autotest_common.sh@969 -- # kill 59643 00:04:52.623 07:30:18 json_config -- common/autotest_common.sh@974 -- # wait 59643 00:04:52.882 07:30:18 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:52.882 07:30:18 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:52.882 07:30:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:52.882 07:30:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.882 INFO: Success 00:04:52.882 07:30:18 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:52.882 07:30:18 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:52.882 ************************************ 00:04:52.882 END TEST json_config 00:04:52.882 ************************************ 00:04:52.882 00:04:52.882 real 0m8.213s 00:04:52.882 user 0m11.375s 00:04:52.882 sys 0m1.987s 00:04:52.882 07:30:18 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.882 07:30:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.882 07:30:18 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:52.882 07:30:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.882 07:30:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.882 07:30:18 -- common/autotest_common.sh@10 -- # set +x 00:04:52.882 ************************************ 00:04:52.882 START TEST json_config_extra_key 00:04:52.882 ************************************ 00:04:52.882 07:30:18 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:53.141 07:30:18 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:04:53.141 07:30:18 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:53.141 07:30:18 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.141 07:30:18 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.141 07:30:18 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.141 07:30:18 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.141 07:30:18 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.141 07:30:18 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.141 07:30:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:53.141 07:30:18 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.141 07:30:18 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:53.141 07:30:18 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:53.141 07:30:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:53.141 07:30:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:53.141 07:30:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:53.141 07:30:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:53.141 07:30:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:53.141 07:30:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:53.141 07:30:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:53.141 07:30:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:53.141 07:30:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:53.141 07:30:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:53.141 INFO: launching applications... 00:04:53.141 07:30:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:53.141 07:30:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:53.141 07:30:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:53.141 07:30:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:53.141 07:30:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:53.141 07:30:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:53.141 07:30:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:53.141 07:30:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.141 07:30:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.141 07:30:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59784 00:04:53.141 Waiting for target to run... 00:04:53.141 07:30:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:53.141 07:30:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59784 /var/tmp/spdk_tgt.sock 00:04:53.141 07:30:18 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 59784 ']' 00:04:53.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
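The json_config_extra_key run above starts the target directly from extra_key.json and then waits for the RPC socket to answer; in the sketch below the rpc_get_methods poll is a stand-in for the waitforlisten helper, not its actual implementation.

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    tgt_pid=$!
    until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # keep polling until the relaunched target answers on its socket
    done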
00:04:53.141 07:30:18 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:53.141 07:30:18 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:53.141 07:30:18 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:53.141 07:30:18 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:53.141 07:30:18 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:53.141 07:30:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:53.141 [2024-07-26 07:30:18.608337] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:04:53.141 [2024-07-26 07:30:18.608442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59784 ] 00:04:53.709 [2024-07-26 07:30:19.128560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.709 [2024-07-26 07:30:19.239758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.709 [2024-07-26 07:30:19.260296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:54.275 00:04:54.275 INFO: shutting down applications... 00:04:54.275 07:30:19 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.275 07:30:19 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:54.275 07:30:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:54.275 07:30:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
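Stripped of the xtrace noise, the launch above is: start spdk_tgt with the JSON config on a private RPC socket, then poll until the RPC server answers. A minimal sketch of that start-and-wait pattern; the rpc_get_methods probe here stands in for the real waitforlisten helper, which does more bookkeeping:

    spdk_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    # same invocation as the trace: core mask 0x1, 1024 MiB, private RPC socket, JSON config
    "$spdk_bin" -m 0x1 -s 1024 -r "$sock" --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    tgt_pid=$!
    for ((i = 0; i < 30; i++)); do
        # succeeds only once the target is up and listening on $sock
        "$rpc_py" -s "$sock" -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.5
    done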
00:04:54.275 07:30:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:54.275 07:30:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:54.275 07:30:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:54.275 07:30:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59784 ]] 00:04:54.275 07:30:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59784 00:04:54.275 07:30:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:54.275 07:30:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.275 07:30:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59784 00:04:54.275 07:30:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.534 07:30:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:54.534 07:30:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.534 07:30:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59784 00:04:54.534 07:30:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.100 07:30:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.100 07:30:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.100 07:30:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59784 00:04:55.100 07:30:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:55.100 07:30:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:55.100 07:30:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:55.100 07:30:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:55.100 SPDK target shutdown done 00:04:55.100 Success 00:04:55.100 07:30:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:55.100 ************************************ 00:04:55.100 END TEST json_config_extra_key 00:04:55.100 ************************************ 00:04:55.100 00:04:55.100 real 0m2.144s 00:04:55.100 user 0m1.607s 00:04:55.100 sys 0m0.560s 00:04:55.100 07:30:20 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.100 07:30:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:55.100 07:30:20 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:55.100 07:30:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.100 07:30:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.100 07:30:20 -- common/autotest_common.sh@10 -- # set +x 00:04:55.100 ************************************ 00:04:55.100 START TEST alias_rpc 00:04:55.100 ************************************ 00:04:55.100 07:30:20 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:55.367 * Looking for test storage... 
00:04:55.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:55.367 07:30:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:55.367 07:30:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59859 00:04:55.367 07:30:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59859 00:04:55.367 07:30:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.367 07:30:20 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 59859 ']' 00:04:55.367 07:30:20 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.367 07:30:20 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:55.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.367 07:30:20 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.367 07:30:20 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:55.367 07:30:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.367 [2024-07-26 07:30:20.807050] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:04:55.367 [2024-07-26 07:30:20.807772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59859 ] 00:04:55.367 [2024-07-26 07:30:20.946562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.634 [2024-07-26 07:30:21.079620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.634 [2024-07-26 07:30:21.152264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:56.201 07:30:21 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.201 07:30:21 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:56.201 07:30:21 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:56.461 07:30:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59859 00:04:56.461 07:30:22 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 59859 ']' 00:04:56.461 07:30:22 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 59859 00:04:56.461 07:30:22 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:56.461 07:30:22 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:56.461 07:30:22 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59859 00:04:56.461 killing process with pid 59859 00:04:56.461 07:30:22 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:56.461 07:30:22 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:56.461 07:30:22 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59859' 00:04:56.461 07:30:22 alias_rpc -- common/autotest_common.sh@969 -- # kill 59859 00:04:56.461 07:30:22 alias_rpc -- common/autotest_common.sh@974 -- # wait 59859 00:04:57.028 00:04:57.028 real 0m1.912s 00:04:57.028 user 0m2.034s 00:04:57.028 sys 0m0.517s 00:04:57.028 07:30:22 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.028 07:30:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.028 
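Both teardowns traced above follow the same shape: signal the target, then poll with kill -0 until the PID is gone (json_config_test_shutdown_app allows up to 30 x 0.5 s; killprocess in alias_rpc finishes with kill and wait). A minimal sketch, assuming tgt_pid holds the PID of a target started as sketched earlier:

    kill -SIGINT "$tgt_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$tgt_pid" 2>/dev/null || break   # process gone -> clean shutdown
        sleep 0.5
    done
    wait "$tgt_pid" 2>/dev/null || true           # reap the child once it exits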
************************************ 00:04:57.028 END TEST alias_rpc 00:04:57.028 ************************************ 00:04:57.028 07:30:22 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:57.028 07:30:22 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:57.028 07:30:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.028 07:30:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.028 07:30:22 -- common/autotest_common.sh@10 -- # set +x 00:04:57.028 ************************************ 00:04:57.028 START TEST spdkcli_tcp 00:04:57.028 ************************************ 00:04:57.028 07:30:22 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:57.287 * Looking for test storage... 00:04:57.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:57.287 07:30:22 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:57.287 07:30:22 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:57.287 07:30:22 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:57.287 07:30:22 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:57.287 07:30:22 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:57.287 07:30:22 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:57.287 07:30:22 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:57.287 07:30:22 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.287 07:30:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:57.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.287 07:30:22 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59931 00:04:57.287 07:30:22 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59931 00:04:57.287 07:30:22 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:57.287 07:30:22 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 59931 ']' 00:04:57.287 07:30:22 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.287 07:30:22 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:57.287 07:30:22 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.287 07:30:22 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:57.287 07:30:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:57.287 [2024-07-26 07:30:22.774972] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
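The spdkcli_tcp test starting here checks the TCP path to the RPC server: spdk_tgt listens on the default UNIX socket, socat bridges 127.0.0.1:9998 to it, and rpc.py is pointed at the TCP address with retries until the bridge is up, as the trace below shows. A minimal sketch of that bridge (single-shot socat as in the trace; a 'fork' option would be needed to serve more than one connection):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # expose the target's UNIX-domain RPC socket on TCP port 9998
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # -r retries / -t timeout let the client ride out the bridge starting up
    "$rpc_py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid" 2>/dev/null || true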
00:04:57.287 [2024-07-26 07:30:22.775355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59931 ] 00:04:57.544 [2024-07-26 07:30:22.914738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.544 [2024-07-26 07:30:23.034403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.544 [2024-07-26 07:30:23.034413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.544 [2024-07-26 07:30:23.108485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:58.111 07:30:23 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.111 07:30:23 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:58.111 07:30:23 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59948 00:04:58.111 07:30:23 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:58.111 07:30:23 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:58.370 [ 00:04:58.370 "bdev_malloc_delete", 00:04:58.370 "bdev_malloc_create", 00:04:58.370 "bdev_null_resize", 00:04:58.370 "bdev_null_delete", 00:04:58.370 "bdev_null_create", 00:04:58.370 "bdev_nvme_cuse_unregister", 00:04:58.370 "bdev_nvme_cuse_register", 00:04:58.370 "bdev_opal_new_user", 00:04:58.370 "bdev_opal_set_lock_state", 00:04:58.370 "bdev_opal_delete", 00:04:58.370 "bdev_opal_get_info", 00:04:58.370 "bdev_opal_create", 00:04:58.370 "bdev_nvme_opal_revert", 00:04:58.370 "bdev_nvme_opal_init", 00:04:58.370 "bdev_nvme_send_cmd", 00:04:58.370 "bdev_nvme_get_path_iostat", 00:04:58.370 "bdev_nvme_get_mdns_discovery_info", 00:04:58.370 "bdev_nvme_stop_mdns_discovery", 00:04:58.370 "bdev_nvme_start_mdns_discovery", 00:04:58.370 "bdev_nvme_set_multipath_policy", 00:04:58.370 "bdev_nvme_set_preferred_path", 00:04:58.370 "bdev_nvme_get_io_paths", 00:04:58.370 "bdev_nvme_remove_error_injection", 00:04:58.370 "bdev_nvme_add_error_injection", 00:04:58.370 "bdev_nvme_get_discovery_info", 00:04:58.370 "bdev_nvme_stop_discovery", 00:04:58.370 "bdev_nvme_start_discovery", 00:04:58.370 "bdev_nvme_get_controller_health_info", 00:04:58.370 "bdev_nvme_disable_controller", 00:04:58.370 "bdev_nvme_enable_controller", 00:04:58.370 "bdev_nvme_reset_controller", 00:04:58.370 "bdev_nvme_get_transport_statistics", 00:04:58.370 "bdev_nvme_apply_firmware", 00:04:58.370 "bdev_nvme_detach_controller", 00:04:58.370 "bdev_nvme_get_controllers", 00:04:58.370 "bdev_nvme_attach_controller", 00:04:58.370 "bdev_nvme_set_hotplug", 00:04:58.370 "bdev_nvme_set_options", 00:04:58.370 "bdev_passthru_delete", 00:04:58.370 "bdev_passthru_create", 00:04:58.370 "bdev_lvol_set_parent_bdev", 00:04:58.370 "bdev_lvol_set_parent", 00:04:58.370 "bdev_lvol_check_shallow_copy", 00:04:58.370 "bdev_lvol_start_shallow_copy", 00:04:58.370 "bdev_lvol_grow_lvstore", 00:04:58.370 "bdev_lvol_get_lvols", 00:04:58.370 "bdev_lvol_get_lvstores", 00:04:58.370 "bdev_lvol_delete", 00:04:58.370 "bdev_lvol_set_read_only", 00:04:58.370 "bdev_lvol_resize", 00:04:58.370 "bdev_lvol_decouple_parent", 00:04:58.370 "bdev_lvol_inflate", 00:04:58.370 "bdev_lvol_rename", 00:04:58.370 "bdev_lvol_clone_bdev", 00:04:58.370 "bdev_lvol_clone", 00:04:58.370 "bdev_lvol_snapshot", 00:04:58.370 "bdev_lvol_create", 
00:04:58.370 "bdev_lvol_delete_lvstore", 00:04:58.370 "bdev_lvol_rename_lvstore", 00:04:58.370 "bdev_lvol_create_lvstore", 00:04:58.370 "bdev_raid_set_options", 00:04:58.370 "bdev_raid_remove_base_bdev", 00:04:58.370 "bdev_raid_add_base_bdev", 00:04:58.370 "bdev_raid_delete", 00:04:58.370 "bdev_raid_create", 00:04:58.370 "bdev_raid_get_bdevs", 00:04:58.370 "bdev_error_inject_error", 00:04:58.370 "bdev_error_delete", 00:04:58.370 "bdev_error_create", 00:04:58.370 "bdev_split_delete", 00:04:58.370 "bdev_split_create", 00:04:58.371 "bdev_delay_delete", 00:04:58.371 "bdev_delay_create", 00:04:58.371 "bdev_delay_update_latency", 00:04:58.371 "bdev_zone_block_delete", 00:04:58.371 "bdev_zone_block_create", 00:04:58.371 "blobfs_create", 00:04:58.371 "blobfs_detect", 00:04:58.371 "blobfs_set_cache_size", 00:04:58.371 "bdev_aio_delete", 00:04:58.371 "bdev_aio_rescan", 00:04:58.371 "bdev_aio_create", 00:04:58.371 "bdev_ftl_set_property", 00:04:58.371 "bdev_ftl_get_properties", 00:04:58.371 "bdev_ftl_get_stats", 00:04:58.371 "bdev_ftl_unmap", 00:04:58.371 "bdev_ftl_unload", 00:04:58.371 "bdev_ftl_delete", 00:04:58.371 "bdev_ftl_load", 00:04:58.371 "bdev_ftl_create", 00:04:58.371 "bdev_virtio_attach_controller", 00:04:58.371 "bdev_virtio_scsi_get_devices", 00:04:58.371 "bdev_virtio_detach_controller", 00:04:58.371 "bdev_virtio_blk_set_hotplug", 00:04:58.371 "bdev_iscsi_delete", 00:04:58.371 "bdev_iscsi_create", 00:04:58.371 "bdev_iscsi_set_options", 00:04:58.371 "bdev_uring_delete", 00:04:58.371 "bdev_uring_rescan", 00:04:58.371 "bdev_uring_create", 00:04:58.371 "accel_error_inject_error", 00:04:58.371 "ioat_scan_accel_module", 00:04:58.371 "dsa_scan_accel_module", 00:04:58.371 "iaa_scan_accel_module", 00:04:58.371 "keyring_file_remove_key", 00:04:58.371 "keyring_file_add_key", 00:04:58.371 "keyring_linux_set_options", 00:04:58.371 "iscsi_get_histogram", 00:04:58.371 "iscsi_enable_histogram", 00:04:58.371 "iscsi_set_options", 00:04:58.371 "iscsi_get_auth_groups", 00:04:58.371 "iscsi_auth_group_remove_secret", 00:04:58.371 "iscsi_auth_group_add_secret", 00:04:58.371 "iscsi_delete_auth_group", 00:04:58.371 "iscsi_create_auth_group", 00:04:58.371 "iscsi_set_discovery_auth", 00:04:58.371 "iscsi_get_options", 00:04:58.371 "iscsi_target_node_request_logout", 00:04:58.371 "iscsi_target_node_set_redirect", 00:04:58.371 "iscsi_target_node_set_auth", 00:04:58.371 "iscsi_target_node_add_lun", 00:04:58.371 "iscsi_get_stats", 00:04:58.371 "iscsi_get_connections", 00:04:58.371 "iscsi_portal_group_set_auth", 00:04:58.371 "iscsi_start_portal_group", 00:04:58.371 "iscsi_delete_portal_group", 00:04:58.371 "iscsi_create_portal_group", 00:04:58.371 "iscsi_get_portal_groups", 00:04:58.371 "iscsi_delete_target_node", 00:04:58.371 "iscsi_target_node_remove_pg_ig_maps", 00:04:58.371 "iscsi_target_node_add_pg_ig_maps", 00:04:58.371 "iscsi_create_target_node", 00:04:58.371 "iscsi_get_target_nodes", 00:04:58.371 "iscsi_delete_initiator_group", 00:04:58.371 "iscsi_initiator_group_remove_initiators", 00:04:58.371 "iscsi_initiator_group_add_initiators", 00:04:58.371 "iscsi_create_initiator_group", 00:04:58.371 "iscsi_get_initiator_groups", 00:04:58.371 "nvmf_set_crdt", 00:04:58.371 "nvmf_set_config", 00:04:58.371 "nvmf_set_max_subsystems", 00:04:58.371 "nvmf_stop_mdns_prr", 00:04:58.371 "nvmf_publish_mdns_prr", 00:04:58.371 "nvmf_subsystem_get_listeners", 00:04:58.371 "nvmf_subsystem_get_qpairs", 00:04:58.371 "nvmf_subsystem_get_controllers", 00:04:58.371 "nvmf_get_stats", 00:04:58.371 "nvmf_get_transports", 00:04:58.371 
"nvmf_create_transport", 00:04:58.371 "nvmf_get_targets", 00:04:58.371 "nvmf_delete_target", 00:04:58.371 "nvmf_create_target", 00:04:58.371 "nvmf_subsystem_allow_any_host", 00:04:58.371 "nvmf_subsystem_remove_host", 00:04:58.371 "nvmf_subsystem_add_host", 00:04:58.371 "nvmf_ns_remove_host", 00:04:58.371 "nvmf_ns_add_host", 00:04:58.371 "nvmf_subsystem_remove_ns", 00:04:58.371 "nvmf_subsystem_add_ns", 00:04:58.371 "nvmf_subsystem_listener_set_ana_state", 00:04:58.371 "nvmf_discovery_get_referrals", 00:04:58.371 "nvmf_discovery_remove_referral", 00:04:58.371 "nvmf_discovery_add_referral", 00:04:58.371 "nvmf_subsystem_remove_listener", 00:04:58.371 "nvmf_subsystem_add_listener", 00:04:58.371 "nvmf_delete_subsystem", 00:04:58.371 "nvmf_create_subsystem", 00:04:58.371 "nvmf_get_subsystems", 00:04:58.371 "env_dpdk_get_mem_stats", 00:04:58.371 "nbd_get_disks", 00:04:58.371 "nbd_stop_disk", 00:04:58.371 "nbd_start_disk", 00:04:58.371 "ublk_recover_disk", 00:04:58.371 "ublk_get_disks", 00:04:58.371 "ublk_stop_disk", 00:04:58.371 "ublk_start_disk", 00:04:58.371 "ublk_destroy_target", 00:04:58.371 "ublk_create_target", 00:04:58.371 "virtio_blk_create_transport", 00:04:58.371 "virtio_blk_get_transports", 00:04:58.371 "vhost_controller_set_coalescing", 00:04:58.371 "vhost_get_controllers", 00:04:58.371 "vhost_delete_controller", 00:04:58.371 "vhost_create_blk_controller", 00:04:58.371 "vhost_scsi_controller_remove_target", 00:04:58.371 "vhost_scsi_controller_add_target", 00:04:58.371 "vhost_start_scsi_controller", 00:04:58.371 "vhost_create_scsi_controller", 00:04:58.371 "thread_set_cpumask", 00:04:58.371 "framework_get_governor", 00:04:58.371 "framework_get_scheduler", 00:04:58.371 "framework_set_scheduler", 00:04:58.371 "framework_get_reactors", 00:04:58.371 "thread_get_io_channels", 00:04:58.371 "thread_get_pollers", 00:04:58.371 "thread_get_stats", 00:04:58.371 "framework_monitor_context_switch", 00:04:58.371 "spdk_kill_instance", 00:04:58.371 "log_enable_timestamps", 00:04:58.371 "log_get_flags", 00:04:58.371 "log_clear_flag", 00:04:58.371 "log_set_flag", 00:04:58.371 "log_get_level", 00:04:58.371 "log_set_level", 00:04:58.371 "log_get_print_level", 00:04:58.371 "log_set_print_level", 00:04:58.371 "framework_enable_cpumask_locks", 00:04:58.371 "framework_disable_cpumask_locks", 00:04:58.371 "framework_wait_init", 00:04:58.371 "framework_start_init", 00:04:58.371 "scsi_get_devices", 00:04:58.371 "bdev_get_histogram", 00:04:58.371 "bdev_enable_histogram", 00:04:58.371 "bdev_set_qos_limit", 00:04:58.371 "bdev_set_qd_sampling_period", 00:04:58.371 "bdev_get_bdevs", 00:04:58.371 "bdev_reset_iostat", 00:04:58.371 "bdev_get_iostat", 00:04:58.371 "bdev_examine", 00:04:58.371 "bdev_wait_for_examine", 00:04:58.371 "bdev_set_options", 00:04:58.371 "notify_get_notifications", 00:04:58.371 "notify_get_types", 00:04:58.371 "accel_get_stats", 00:04:58.371 "accel_set_options", 00:04:58.371 "accel_set_driver", 00:04:58.371 "accel_crypto_key_destroy", 00:04:58.371 "accel_crypto_keys_get", 00:04:58.371 "accel_crypto_key_create", 00:04:58.371 "accel_assign_opc", 00:04:58.371 "accel_get_module_info", 00:04:58.371 "accel_get_opc_assignments", 00:04:58.371 "vmd_rescan", 00:04:58.371 "vmd_remove_device", 00:04:58.371 "vmd_enable", 00:04:58.371 "sock_get_default_impl", 00:04:58.371 "sock_set_default_impl", 00:04:58.371 "sock_impl_set_options", 00:04:58.371 "sock_impl_get_options", 00:04:58.371 "iobuf_get_stats", 00:04:58.371 "iobuf_set_options", 00:04:58.371 "framework_get_pci_devices", 00:04:58.371 
"framework_get_config", 00:04:58.371 "framework_get_subsystems", 00:04:58.371 "trace_get_info", 00:04:58.371 "trace_get_tpoint_group_mask", 00:04:58.371 "trace_disable_tpoint_group", 00:04:58.371 "trace_enable_tpoint_group", 00:04:58.371 "trace_clear_tpoint_mask", 00:04:58.371 "trace_set_tpoint_mask", 00:04:58.371 "keyring_get_keys", 00:04:58.371 "spdk_get_version", 00:04:58.371 "rpc_get_methods" 00:04:58.371 ] 00:04:58.630 07:30:23 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:58.630 07:30:23 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:58.630 07:30:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.630 07:30:24 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:58.630 07:30:24 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59931 00:04:58.630 07:30:24 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 59931 ']' 00:04:58.630 07:30:24 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 59931 00:04:58.630 07:30:24 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:58.630 07:30:24 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:58.630 07:30:24 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59931 00:04:58.630 killing process with pid 59931 00:04:58.630 07:30:24 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:58.630 07:30:24 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:58.630 07:30:24 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59931' 00:04:58.630 07:30:24 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 59931 00:04:58.630 07:30:24 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 59931 00:04:59.199 00:04:59.199 real 0m1.970s 00:04:59.199 user 0m3.540s 00:04:59.199 sys 0m0.565s 00:04:59.199 ************************************ 00:04:59.199 END TEST spdkcli_tcp 00:04:59.199 ************************************ 00:04:59.199 07:30:24 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.199 07:30:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.199 07:30:24 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:59.199 07:30:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.199 07:30:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.199 07:30:24 -- common/autotest_common.sh@10 -- # set +x 00:04:59.199 ************************************ 00:04:59.199 START TEST dpdk_mem_utility 00:04:59.199 ************************************ 00:04:59.199 07:30:24 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:59.199 * Looking for test storage... 00:04:59.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:59.199 07:30:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:59.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:59.199 07:30:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60022 00:04:59.199 07:30:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60022 00:04:59.199 07:30:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.199 07:30:24 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 60022 ']' 00:04:59.199 07:30:24 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.199 07:30:24 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.199 07:30:24 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.199 07:30:24 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.199 07:30:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:59.199 [2024-07-26 07:30:24.786770] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:04:59.199 [2024-07-26 07:30:24.786887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60022 ] 00:04:59.458 [2024-07-26 07:30:24.922032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.458 [2024-07-26 07:30:25.035654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.717 [2024-07-26 07:30:25.110360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:00.284 07:30:25 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.284 07:30:25 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:00.284 07:30:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:00.284 07:30:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:00.284 07:30:25 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.284 07:30:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:00.284 { 00:05:00.284 "filename": "/tmp/spdk_mem_dump.txt" 00:05:00.284 } 00:05:00.284 07:30:25 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.284 07:30:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:00.284 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:00.284 1 heaps totaling size 814.000000 MiB 00:05:00.284 size: 814.000000 MiB heap id: 0 00:05:00.284 end heaps---------- 00:05:00.284 8 mempools totaling size 598.116089 MiB 00:05:00.284 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:00.284 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:00.284 size: 84.521057 MiB name: bdev_io_60022 00:05:00.284 size: 51.011292 MiB name: evtpool_60022 00:05:00.284 size: 50.003479 MiB name: msgpool_60022 00:05:00.284 size: 21.763794 MiB name: PDU_Pool 00:05:00.284 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:00.284 size: 0.026123 MiB name: Session_Pool 00:05:00.284 end mempools------- 00:05:00.284 6 memzones totaling size 4.142822 MiB 00:05:00.284 size: 1.000366 MiB name: RG_ring_0_60022 00:05:00.284 size: 1.000366 MiB 
name: RG_ring_1_60022 00:05:00.284 size: 1.000366 MiB name: RG_ring_4_60022 00:05:00.284 size: 1.000366 MiB name: RG_ring_5_60022 00:05:00.284 size: 0.125366 MiB name: RG_ring_2_60022 00:05:00.284 size: 0.015991 MiB name: RG_ring_3_60022 00:05:00.284 end memzones------- 00:05:00.284 07:30:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:00.284 heap id: 0 total size: 814.000000 MiB number of busy elements: 287 number of free elements: 15 00:05:00.284 list of free elements. size: 12.474304 MiB 00:05:00.284 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:00.284 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:00.284 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:00.284 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:00.284 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:00.284 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:00.284 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:00.284 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:00.284 element at address: 0x200000200000 with size: 0.833191 MiB 00:05:00.284 element at address: 0x20001aa00000 with size: 0.570251 MiB 00:05:00.284 element at address: 0x20000b200000 with size: 0.489807 MiB 00:05:00.284 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:00.284 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:00.284 element at address: 0x200027e00000 with size: 0.396301 MiB 00:05:00.284 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:00.285 list of standard malloc elements. size: 199.263123 MiB 00:05:00.285 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:00.285 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:00.285 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:00.285 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:00.285 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:00.285 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:00.285 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:00.285 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:00.285 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:00.285 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:00.285 
element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:00.285 element at address: 
0x200003a590c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000070fdd80 with size: 
0.000183 MiB 00:05:00.285 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:00.285 
element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:00.285 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e65740 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e65800 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6c400 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:00.285 element at address: 
0x200027e6cf00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6f3c0 with size: 
0.000183 MiB 00:05:00.285 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:00.285 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:00.286 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:00.286 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:00.286 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:00.286 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:00.286 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:00.286 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:00.286 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:00.286 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:00.286 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:00.286 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:00.286 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:00.286 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:00.286 list of memzone associated elements. size: 602.262573 MiB 00:05:00.286 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:00.286 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:00.286 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:00.286 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:00.286 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:00.286 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_60022_0 00:05:00.286 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:00.286 associated memzone info: size: 48.002930 MiB name: MP_evtpool_60022_0 00:05:00.286 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:00.286 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60022_0 00:05:00.286 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:00.286 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:00.286 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:00.286 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:00.286 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:00.286 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_60022 00:05:00.286 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:00.286 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60022 00:05:00.286 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:00.286 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60022 00:05:00.286 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:00.286 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:00.286 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:00.286 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:00.286 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:00.286 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:00.286 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:00.286 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:00.286 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:00.286 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60022 00:05:00.286 element at address: 0x200003affc00 with 
size: 1.000488 MiB 00:05:00.286 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60022 00:05:00.286 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:00.286 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60022 00:05:00.286 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:00.286 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60022 00:05:00.286 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:00.286 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60022 00:05:00.286 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:00.286 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:00.286 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:00.286 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:00.286 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:00.286 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:00.286 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:00.286 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60022 00:05:00.286 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:00.286 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:00.286 element at address: 0x200027e658c0 with size: 0.023743 MiB 00:05:00.286 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:00.286 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:00.286 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60022 00:05:00.286 element at address: 0x200027e6ba00 with size: 0.002441 MiB 00:05:00.286 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:00.286 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:00.286 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60022 00:05:00.286 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:00.286 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60022 00:05:00.286 element at address: 0x200027e6c4c0 with size: 0.000305 MiB 00:05:00.286 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:00.544 07:30:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:00.544 07:30:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60022 00:05:00.544 07:30:25 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 60022 ']' 00:05:00.544 07:30:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 60022 00:05:00.544 07:30:25 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:00.544 07:30:25 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:00.544 07:30:25 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60022 00:05:00.544 killing process with pid 60022 00:05:00.544 07:30:25 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:00.544 07:30:25 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:00.544 07:30:25 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60022' 00:05:00.544 07:30:25 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 60022 00:05:00.544 07:30:25 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 60022 00:05:01.112 00:05:01.112 real 0m1.855s 00:05:01.112 user 
0m1.917s 00:05:01.112 sys 0m0.494s 00:05:01.112 07:30:26 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.112 ************************************ 00:05:01.112 END TEST dpdk_mem_utility 00:05:01.112 07:30:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.112 ************************************ 00:05:01.112 07:30:26 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:01.112 07:30:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.112 07:30:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.112 07:30:26 -- common/autotest_common.sh@10 -- # set +x 00:05:01.112 ************************************ 00:05:01.112 START TEST event 00:05:01.112 ************************************ 00:05:01.112 07:30:26 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:01.112 * Looking for test storage... 00:05:01.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:01.112 07:30:26 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:01.112 07:30:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:01.112 07:30:26 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:01.112 07:30:26 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:01.112 07:30:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.112 07:30:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.112 ************************************ 00:05:01.112 START TEST event_perf 00:05:01.112 ************************************ 00:05:01.112 07:30:26 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:01.112 Running I/O for 1 seconds...[2024-07-26 07:30:26.660850] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:05:01.112 [2024-07-26 07:30:26.661083] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60098 ] 00:05:01.371 [2024-07-26 07:30:26.799987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:01.371 [2024-07-26 07:30:26.903224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.371 [2024-07-26 07:30:26.903373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.371 [2024-07-26 07:30:26.903534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.371 Running I/O for 1 seconds...[2024-07-26 07:30:26.903535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.746 00:05:02.746 lcore 0: 126858 00:05:02.746 lcore 1: 126859 00:05:02.746 lcore 2: 126861 00:05:02.746 lcore 3: 126863 00:05:02.746 done. 
00:05:02.746 ************************************ 00:05:02.746 END TEST event_perf 00:05:02.746 ************************************ 00:05:02.746 00:05:02.746 real 0m1.378s 00:05:02.746 user 0m4.166s 00:05:02.746 sys 0m0.078s 00:05:02.746 07:30:28 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.746 07:30:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:02.746 07:30:28 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:02.746 07:30:28 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:02.746 07:30:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.746 07:30:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.746 ************************************ 00:05:02.746 START TEST event_reactor 00:05:02.746 ************************************ 00:05:02.746 07:30:28 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:02.746 [2024-07-26 07:30:28.080416] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:05:02.746 [2024-07-26 07:30:28.080533] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60137 ] 00:05:02.746 [2024-07-26 07:30:28.215373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.746 [2024-07-26 07:30:28.337929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.119 test_start 00:05:04.119 oneshot 00:05:04.119 tick 100 00:05:04.119 tick 100 00:05:04.119 tick 250 00:05:04.119 tick 100 00:05:04.119 tick 100 00:05:04.119 tick 250 00:05:04.119 tick 100 00:05:04.119 tick 500 00:05:04.119 tick 100 00:05:04.119 tick 100 00:05:04.119 tick 250 00:05:04.119 tick 100 00:05:04.119 tick 100 00:05:04.119 test_end 00:05:04.119 ************************************ 00:05:04.119 END TEST event_reactor 00:05:04.119 ************************************ 00:05:04.119 00:05:04.119 real 0m1.394s 00:05:04.119 user 0m1.222s 00:05:04.119 sys 0m0.065s 00:05:04.119 07:30:29 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.119 07:30:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:04.119 07:30:29 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:04.119 07:30:29 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:04.119 07:30:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.119 07:30:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.119 ************************************ 00:05:04.119 START TEST event_reactor_perf 00:05:04.119 ************************************ 00:05:04.119 07:30:29 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:04.119 [2024-07-26 07:30:29.529073] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:05:04.119 [2024-07-26 07:30:29.529170] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60167 ] 00:05:04.119 [2024-07-26 07:30:29.667327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.377 [2024-07-26 07:30:29.792350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.312 test_start 00:05:05.312 test_end 00:05:05.312 Performance: 402598 events per second 00:05:05.312 ************************************ 00:05:05.312 END TEST event_reactor_perf 00:05:05.312 ************************************ 00:05:05.312 00:05:05.312 real 0m1.384s 00:05:05.312 user 0m1.207s 00:05:05.312 sys 0m0.071s 00:05:05.312 07:30:30 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.312 07:30:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.569 07:30:30 event -- event/event.sh@49 -- # uname -s 00:05:05.570 07:30:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:05.570 07:30:30 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:05.570 07:30:30 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.570 07:30:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.570 07:30:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.570 ************************************ 00:05:05.570 START TEST event_scheduler 00:05:05.570 ************************************ 00:05:05.570 07:30:30 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:05.570 * Looking for test storage... 00:05:05.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:05.570 07:30:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:05.570 07:30:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60229 00:05:05.570 07:30:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:05.570 07:30:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.570 07:30:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60229 00:05:05.570 07:30:31 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 60229 ']' 00:05:05.570 07:30:31 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.570 07:30:31 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.570 07:30:31 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.570 07:30:31 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.570 07:30:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.570 [2024-07-26 07:30:31.094594] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:05:05.570 [2024-07-26 07:30:31.094943] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60229 ] 00:05:05.827 [2024-07-26 07:30:31.234971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:05.827 [2024-07-26 07:30:31.382319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.827 [2024-07-26 07:30:31.382397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.827 [2024-07-26 07:30:31.382615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:05.827 [2024-07-26 07:30:31.383412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.760 07:30:32 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.760 07:30:32 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:06.760 07:30:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:06.760 07:30:32 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.760 07:30:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.760 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.760 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.760 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.760 POWER: Cannot set governor of lcore 0 to performance 00:05:06.761 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.761 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.761 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.761 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.761 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:06.761 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:06.761 POWER: Unable to set Power Management Environment for lcore 0 00:05:06.761 [2024-07-26 07:30:32.059207] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:06.761 [2024-07-26 07:30:32.059423] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:06.761 [2024-07-26 07:30:32.059721] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:06.761 [2024-07-26 07:30:32.060000] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:06.761 [2024-07-26 07:30:32.060209] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:06.761 [2024-07-26 07:30:32.060377] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:06.761 07:30:32 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.761 07:30:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:06.761 07:30:32 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.761 07:30:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.761 [2024-07-26 07:30:32.141039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:06.761 [2024-07-26 07:30:32.187688] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:06.761 07:30:32 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.761 07:30:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:06.761 07:30:32 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.761 07:30:32 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.761 07:30:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.761 ************************************ 00:05:06.761 START TEST scheduler_create_thread 00:05:06.761 ************************************ 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.761 2 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.761 3 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.761 4 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.761 5 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.761 6 00:05:06.761 
07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.761 7 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.761 8 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.761 9 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.761 10 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.761 07:30:32 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.761 07:30:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.167 07:30:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.167 07:30:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:08.167 07:30:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:08.167 07:30:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.167 07:30:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.541 ************************************ 00:05:09.541 END TEST scheduler_create_thread 00:05:09.541 ************************************ 00:05:09.541 07:30:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.541 00:05:09.541 real 0m2.614s 00:05:09.541 user 0m0.020s 00:05:09.541 sys 0m0.004s 00:05:09.541 07:30:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.541 07:30:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.541 07:30:34 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:09.541 07:30:34 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60229 00:05:09.541 07:30:34 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 60229 ']' 00:05:09.541 07:30:34 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 60229 00:05:09.541 07:30:34 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:09.541 07:30:34 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.541 07:30:34 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60229 00:05:09.541 killing process with pid 60229 00:05:09.541 07:30:34 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:09.541 07:30:34 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:09.541 07:30:34 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60229' 00:05:09.541 07:30:34 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 60229 00:05:09.541 07:30:34 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 60229 00:05:09.799 [2024-07-26 07:30:35.292692] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
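Aside on the trace above: the scheduler test is driven entirely through rpc.py calls that appear verbatim in the xtrace (framework_set_scheduler, framework_start_init, and the scheduler_plugin methods scheduler_thread_create, scheduler_thread_set_active, scheduler_thread_delete). The sketch below is only an illustrative reconstruction of that sequence and is not part of the captured log; it assumes rpc_cmd resolves to scripts/rpc.py talking to the /var/tmp/spdk.sock socket named in the waitforlisten step, and that the scheduler_plugin module is importable the way the test harness arranges it. Thread names, cpumasks and activity percentages are copied from the trace itself.
# launch the scheduler test app on cores 0-3 and wait for its RPC socket (scheduler.sh@34-37 above)
./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!
# select the dynamic scheduler, then let subsystem init finish (scheduler.sh@39-40)
scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
# create a pinned, fully busy thread plus an unpinned half_active one (scheduler.sh@12 and @22)
scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
tid=$(scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
# raise the returned thread id to 50% activity (scheduler.sh@23)
scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
# create and immediately delete a throwaway thread (scheduler.sh@25-26)
tid=$(scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_delete "$tid"
kill "$scheduler_pid"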
00:05:10.057 ************************************ 00:05:10.057 END TEST event_scheduler 00:05:10.057 ************************************ 00:05:10.057 00:05:10.057 real 0m4.652s 00:05:10.057 user 0m8.470s 00:05:10.057 sys 0m0.433s 00:05:10.058 07:30:35 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.058 07:30:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.058 07:30:35 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:10.058 07:30:35 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:10.058 07:30:35 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.058 07:30:35 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.058 07:30:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.316 ************************************ 00:05:10.316 START TEST app_repeat 00:05:10.316 ************************************ 00:05:10.316 07:30:35 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:10.316 07:30:35 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.316 07:30:35 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.316 07:30:35 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:10.316 07:30:35 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.316 07:30:35 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:10.316 07:30:35 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:10.316 07:30:35 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:10.316 Process app_repeat pid: 60328 00:05:10.316 spdk_app_start Round 0 00:05:10.316 07:30:35 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60328 00:05:10.316 07:30:35 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.316 07:30:35 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60328' 00:05:10.317 07:30:35 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:10.317 07:30:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:10.317 07:30:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:10.317 07:30:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60328 /var/tmp/spdk-nbd.sock 00:05:10.317 07:30:35 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60328 ']' 00:05:10.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.317 07:30:35 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.317 07:30:35 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.317 07:30:35 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:10.317 07:30:35 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.317 07:30:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.317 [2024-07-26 07:30:35.696214] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:05:10.317 [2024-07-26 07:30:35.696306] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60328 ] 00:05:10.317 [2024-07-26 07:30:35.833320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.575 [2024-07-26 07:30:35.941582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.575 [2024-07-26 07:30:35.941590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.575 [2024-07-26 07:30:36.019277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:11.141 07:30:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.141 07:30:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:11.141 07:30:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.399 Malloc0 00:05:11.399 07:30:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.657 Malloc1 00:05:11.657 07:30:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.657 07:30:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.657 07:30:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.657 07:30:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.657 07:30:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.657 07:30:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.657 07:30:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.657 07:30:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.657 07:30:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.657 07:30:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.657 07:30:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.657 07:30:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.657 07:30:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.657 07:30:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.657 07:30:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.657 07:30:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.915 /dev/nbd0 00:05:12.174 07:30:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.174 07:30:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.174 07:30:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:12.174 07:30:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:12.174 07:30:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:12.174 07:30:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:12.174 07:30:37 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:12.174 07:30:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:12.174 07:30:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:12.174 07:30:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:12.174 07:30:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.174 1+0 records in 00:05:12.174 1+0 records out 00:05:12.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030098 s, 13.6 MB/s 00:05:12.174 07:30:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.174 07:30:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:12.174 07:30:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.174 07:30:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:12.174 07:30:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:12.174 07:30:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.174 07:30:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.174 07:30:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.432 /dev/nbd1 00:05:12.432 07:30:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.432 07:30:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.432 07:30:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:12.432 07:30:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:12.432 07:30:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:12.432 07:30:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:12.432 07:30:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:12.432 07:30:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:12.432 07:30:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:12.432 07:30:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:12.432 07:30:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.432 1+0 records in 00:05:12.432 1+0 records out 00:05:12.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205981 s, 19.9 MB/s 00:05:12.432 07:30:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.432 07:30:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:12.432 07:30:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.432 07:30:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:12.432 07:30:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:12.432 07:30:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.432 07:30:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.432 07:30:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:12.432 07:30:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.432 07:30:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.691 { 00:05:12.691 "nbd_device": "/dev/nbd0", 00:05:12.691 "bdev_name": "Malloc0" 00:05:12.691 }, 00:05:12.691 { 00:05:12.691 "nbd_device": "/dev/nbd1", 00:05:12.691 "bdev_name": "Malloc1" 00:05:12.691 } 00:05:12.691 ]' 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.691 { 00:05:12.691 "nbd_device": "/dev/nbd0", 00:05:12.691 "bdev_name": "Malloc0" 00:05:12.691 }, 00:05:12.691 { 00:05:12.691 "nbd_device": "/dev/nbd1", 00:05:12.691 "bdev_name": "Malloc1" 00:05:12.691 } 00:05:12.691 ]' 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.691 /dev/nbd1' 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.691 /dev/nbd1' 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.691 256+0 records in 00:05:12.691 256+0 records out 00:05:12.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0098134 s, 107 MB/s 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.691 256+0 records in 00:05:12.691 256+0 records out 00:05:12.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202408 s, 51.8 MB/s 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.691 256+0 records in 00:05:12.691 256+0 records out 00:05:12.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275523 s, 38.1 MB/s 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.691 07:30:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.949 07:30:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.949 07:30:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.949 07:30:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.949 07:30:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.949 07:30:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.949 07:30:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.949 07:30:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.949 07:30:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.949 07:30:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.949 07:30:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.515 07:30:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.515 07:30:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.515 07:30:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.515 07:30:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.515 07:30:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.515 07:30:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.515 07:30:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.515 07:30:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.515 07:30:38 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.515 07:30:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.515 07:30:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.515 07:30:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.515 07:30:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.515 07:30:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.515 07:30:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.515 07:30:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.515 07:30:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.515 07:30:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.515 07:30:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.515 07:30:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.515 07:30:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.515 07:30:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.515 07:30:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.515 07:30:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.082 07:30:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:14.340 [2024-07-26 07:30:39.703768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.340 [2024-07-26 07:30:39.799535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.340 [2024-07-26 07:30:39.799538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.340 [2024-07-26 07:30:39.876092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:14.340 [2024-07-26 07:30:39.876228] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:14.340 [2024-07-26 07:30:39.876244] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:16.868 spdk_app_start Round 1 00:05:16.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.868 07:30:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:16.868 07:30:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:16.868 07:30:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60328 /var/tmp/spdk-nbd.sock 00:05:16.868 07:30:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60328 ']' 00:05:16.868 07:30:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.868 07:30:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.868 07:30:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:16.868 07:30:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.868 07:30:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.126 07:30:42 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:17.126 07:30:42 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:17.126 07:30:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.384 Malloc0 00:05:17.384 07:30:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.672 Malloc1 00:05:17.672 07:30:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.672 07:30:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.672 07:30:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.672 07:30:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.672 07:30:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.672 07:30:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.672 07:30:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.672 07:30:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.672 07:30:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.672 07:30:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.672 07:30:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.672 07:30:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.672 07:30:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.672 07:30:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.672 07:30:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.672 07:30:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.930 /dev/nbd0 00:05:17.931 07:30:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.931 07:30:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.931 07:30:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:17.931 07:30:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:17.931 07:30:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:17.931 07:30:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:17.931 07:30:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:17.931 07:30:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:17.931 07:30:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:17.931 07:30:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:17.931 07:30:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.931 1+0 records in 00:05:17.931 1+0 records out 
00:05:17.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189603 s, 21.6 MB/s 00:05:17.931 07:30:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.931 07:30:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:17.931 07:30:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.931 07:30:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:17.931 07:30:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:17.931 07:30:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.931 07:30:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.931 07:30:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.188 /dev/nbd1 00:05:18.188 07:30:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.188 07:30:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.188 07:30:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:18.188 07:30:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:18.188 07:30:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:18.188 07:30:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:18.188 07:30:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:18.188 07:30:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:18.188 07:30:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:18.188 07:30:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:18.188 07:30:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.189 1+0 records in 00:05:18.189 1+0 records out 00:05:18.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379648 s, 10.8 MB/s 00:05:18.189 07:30:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.189 07:30:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:18.189 07:30:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.189 07:30:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:18.189 07:30:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:18.189 07:30:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.189 07:30:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.189 07:30:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.189 07:30:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.189 07:30:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.447 07:30:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.447 { 00:05:18.447 "nbd_device": "/dev/nbd0", 00:05:18.447 "bdev_name": "Malloc0" 00:05:18.447 }, 00:05:18.447 { 00:05:18.447 "nbd_device": "/dev/nbd1", 00:05:18.447 "bdev_name": "Malloc1" 00:05:18.447 } 
00:05:18.447 ]' 00:05:18.447 07:30:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.447 { 00:05:18.447 "nbd_device": "/dev/nbd0", 00:05:18.447 "bdev_name": "Malloc0" 00:05:18.447 }, 00:05:18.447 { 00:05:18.447 "nbd_device": "/dev/nbd1", 00:05:18.447 "bdev_name": "Malloc1" 00:05:18.447 } 00:05:18.447 ]' 00:05:18.447 07:30:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.732 /dev/nbd1' 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.732 /dev/nbd1' 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.732 256+0 records in 00:05:18.732 256+0 records out 00:05:18.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00757393 s, 138 MB/s 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.732 256+0 records in 00:05:18.732 256+0 records out 00:05:18.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218563 s, 48.0 MB/s 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.732 256+0 records in 00:05:18.732 256+0 records out 00:05:18.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256101 s, 40.9 MB/s 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.732 07:30:44 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.732 07:30:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:19.008 07:30:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:19.008 07:30:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:19.008 07:30:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:19.008 07:30:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.008 07:30:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.008 07:30:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:19.008 07:30:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.008 07:30:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.008 07:30:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.008 07:30:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:19.266 07:30:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:19.266 07:30:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:19.266 07:30:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:19.266 07:30:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.266 07:30:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.266 07:30:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:19.266 07:30:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.266 07:30:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.266 07:30:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.266 07:30:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.266 07:30:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.524 07:30:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.524 07:30:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.524 07:30:44 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:19.524 07:30:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.524 07:30:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.524 07:30:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.524 07:30:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:19.524 07:30:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.524 07:30:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.524 07:30:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.524 07:30:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.524 07:30:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.524 07:30:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.783 07:30:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:20.042 [2024-07-26 07:30:45.571445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.301 [2024-07-26 07:30:45.653023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.301 [2024-07-26 07:30:45.653033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.301 [2024-07-26 07:30:45.732763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:20.301 [2024-07-26 07:30:45.732897] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.301 [2024-07-26 07:30:45.732913] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.831 07:30:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.831 07:30:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:22.831 spdk_app_start Round 2 00:05:22.831 07:30:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60328 /var/tmp/spdk-nbd.sock 00:05:22.831 07:30:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60328 ']' 00:05:22.831 07:30:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.831 07:30:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.831 07:30:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
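Each app_repeat round tears down through the same nbd_common.sh helpers traced above: nbd_dd_data_verify writes 1 MiB of /dev/urandom data to a temp file, copies it onto every exported /dev/nbdX with O_DIRECT, compares the device contents back with cmp, and nbd_stop_disks then detaches each device over /var/tmp/spdk-nbd.sock while polling /proc/partitions until the kernel entry disappears. A minimal, self-contained sketch of that cycle for a single device, assuming spdk-nbd is already serving it (the retry interval and error handling are assumptions, not taken from the script):

    #!/usr/bin/env bash
    # Sketch: write a known 1 MiB pattern through an NBD device, verify it, detach.
    set -euo pipefail
    nbd=/dev/nbd0                                    # assumes this device is already exported by spdk-nbd
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py  # path as used in the trace
    tmp=$(mktemp)

    dd if=/dev/urandom of="$tmp" bs=4096 count=256           # 1 MiB of reference data
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct    # push it to the device, bypassing the page cache
    cmp -b -n 1M "$tmp" "$nbd"                               # byte-for-byte read-back check
    rm "$tmp"

    "$rpc" -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$nbd"    # detach via the dedicated nbd RPC socket
    for ((i = 1; i <= 20; i++)); do                          # poll until the kernel drops the nbd entry
        grep -q -w "$(basename "$nbd")" /proc/partitions || break
        sleep 0.1                                            # retry interval is an assumption
    done

Round 2 below repeats exactly this sequence against freshly created Malloc0/Malloc1 bdevs, which is why the identical dd and cmp lines reappear.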
00:05:22.831 07:30:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.831 07:30:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.090 07:30:48 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.090 07:30:48 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:23.090 07:30:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.348 Malloc0 00:05:23.348 07:30:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.606 Malloc1 00:05:23.606 07:30:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.606 07:30:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.606 07:30:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.606 07:30:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.606 07:30:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.606 07:30:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.606 07:30:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.606 07:30:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.606 07:30:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.606 07:30:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.606 07:30:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.606 07:30:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.606 07:30:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.606 07:30:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.606 07:30:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.606 07:30:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.865 /dev/nbd0 00:05:23.865 07:30:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.865 07:30:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.865 07:30:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:23.865 07:30:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:23.865 07:30:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:23.865 07:30:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:23.865 07:30:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:23.865 07:30:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:23.865 07:30:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:23.865 07:30:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:23.865 07:30:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.865 1+0 records in 00:05:23.865 1+0 records out 
00:05:23.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320788 s, 12.8 MB/s 00:05:23.865 07:30:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.865 07:30:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:23.865 07:30:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.865 07:30:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:23.865 07:30:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:23.865 07:30:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.865 07:30:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.865 07:30:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.124 /dev/nbd1 00:05:24.124 07:30:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.124 07:30:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.124 07:30:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:24.124 07:30:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:24.124 07:30:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:24.124 07:30:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:24.124 07:30:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:24.124 07:30:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:24.124 07:30:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:24.124 07:30:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:24.124 07:30:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.124 1+0 records in 00:05:24.124 1+0 records out 00:05:24.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444432 s, 9.2 MB/s 00:05:24.124 07:30:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.124 07:30:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:24.124 07:30:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.124 07:30:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:24.124 07:30:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:24.124 07:30:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.124 07:30:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.124 07:30:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.124 07:30:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.124 07:30:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.382 07:30:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.383 { 00:05:24.383 "nbd_device": "/dev/nbd0", 00:05:24.383 "bdev_name": "Malloc0" 00:05:24.383 }, 00:05:24.383 { 00:05:24.383 "nbd_device": "/dev/nbd1", 00:05:24.383 "bdev_name": "Malloc1" 00:05:24.383 } 
00:05:24.383 ]' 00:05:24.383 07:30:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.383 { 00:05:24.383 "nbd_device": "/dev/nbd0", 00:05:24.383 "bdev_name": "Malloc0" 00:05:24.383 }, 00:05:24.383 { 00:05:24.383 "nbd_device": "/dev/nbd1", 00:05:24.383 "bdev_name": "Malloc1" 00:05:24.383 } 00:05:24.383 ]' 00:05:24.383 07:30:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.383 07:30:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.383 /dev/nbd1' 00:05:24.383 07:30:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.383 /dev/nbd1' 00:05:24.383 07:30:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.383 07:30:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.383 07:30:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.383 07:30:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.383 07:30:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.383 07:30:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.383 07:30:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.383 07:30:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.383 07:30:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.383 07:30:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.383 07:30:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.383 07:30:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.383 256+0 records in 00:05:24.383 256+0 records out 00:05:24.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00964979 s, 109 MB/s 00:05:24.383 07:30:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.383 07:30:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.641 256+0 records in 00:05:24.641 256+0 records out 00:05:24.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209274 s, 50.1 MB/s 00:05:24.641 07:30:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.641 07:30:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.641 256+0 records in 00:05:24.641 256+0 records out 00:05:24.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274429 s, 38.2 MB/s 00:05:24.641 07:30:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.641 07:30:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.641 07:30:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.641 07:30:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.641 07:30:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.641 07:30:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.641 07:30:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.641 07:30:50 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.641 07:30:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.641 07:30:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.641 07:30:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.641 07:30:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.641 07:30:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.641 07:30:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.641 07:30:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.641 07:30:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.641 07:30:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.641 07:30:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.641 07:30:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.900 07:30:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.900 07:30:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.900 07:30:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.900 07:30:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.900 07:30:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.900 07:30:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.901 07:30:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.901 07:30:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.901 07:30:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.901 07:30:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.160 07:30:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.160 07:30:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.160 07:30:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.160 07:30:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.160 07:30:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.160 07:30:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.160 07:30:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.160 07:30:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.160 07:30:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.160 07:30:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.160 07:30:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.419 07:30:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.419 07:30:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.419 07:30:50 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:25.419 07:30:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.419 07:30:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.419 07:30:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.419 07:30:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:25.419 07:30:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.419 07:30:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.419 07:30:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.419 07:30:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.419 07:30:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.419 07:30:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.677 07:30:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.935 [2024-07-26 07:30:51.476559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.194 [2024-07-26 07:30:51.572975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.194 [2024-07-26 07:30:51.572985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.194 [2024-07-26 07:30:51.649211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:26.194 [2024-07-26 07:30:51.649322] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.194 [2024-07-26 07:30:51.649338] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.724 07:30:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60328 /var/tmp/spdk-nbd.sock 00:05:28.724 07:30:54 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60328 ']' 00:05:28.724 07:30:54 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.724 07:30:54 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.724 07:30:54 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
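In Round 2 above, each nbd_start_disk call is followed by waitfornbd, which treats a device as ready only once it shows up in /proc/partitions and a single 4 KiB O_DIRECT read from it produces a non-empty file. A condensed sketch of that readiness probe, assuming the bare device name (e.g. nbd0) is passed in; the 20-try budget mirrors the trace, the sleep interval is an assumption:

    # Usage: waitfornbd_sketch nbd0
    waitfornbd_sketch() {
        local nbd_name=$1 tmp i size
        tmp=$(mktemp)
        for ((i = 1; i <= 20; i++)); do                  # wait for the kernel to publish the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                    # interval is an assumption
        done
        # prove the device actually serves data: one 4 KiB read with O_DIRECT
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" -ne 0 ]                                # non-empty read-back means the backing bdev answered
    }

The device counts asserted in the trace (count=2 after the disks start, count=0 after they stop) come from the same pattern: nbd_get_disks is piped through jq -r '.[] | .nbd_device' and grep -c /dev/nbd to turn the JSON reply into a plain number.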
00:05:28.724 07:30:54 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.724 07:30:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.983 07:30:54 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.983 07:30:54 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:28.983 07:30:54 event.app_repeat -- event/event.sh@39 -- # killprocess 60328 00:05:28.983 07:30:54 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 60328 ']' 00:05:28.983 07:30:54 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 60328 00:05:28.983 07:30:54 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:28.983 07:30:54 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.983 07:30:54 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60328 00:05:28.983 07:30:54 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.983 07:30:54 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.983 07:30:54 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60328' 00:05:28.983 killing process with pid 60328 00:05:28.983 07:30:54 event.app_repeat -- common/autotest_common.sh@969 -- # kill 60328 00:05:28.983 07:30:54 event.app_repeat -- common/autotest_common.sh@974 -- # wait 60328 00:05:29.242 spdk_app_start is called in Round 0. 00:05:29.242 Shutdown signal received, stop current app iteration 00:05:29.242 Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 reinitialization... 00:05:29.242 spdk_app_start is called in Round 1. 00:05:29.242 Shutdown signal received, stop current app iteration 00:05:29.242 Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 reinitialization... 00:05:29.242 spdk_app_start is called in Round 2. 00:05:29.242 Shutdown signal received, stop current app iteration 00:05:29.242 Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 reinitialization... 00:05:29.242 spdk_app_start is called in Round 3. 00:05:29.242 Shutdown signal received, stop current app iteration 00:05:29.242 ************************************ 00:05:29.242 END TEST app_repeat 00:05:29.242 ************************************ 00:05:29.242 07:30:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:29.242 07:30:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:29.242 00:05:29.242 real 0m19.124s 00:05:29.242 user 0m42.296s 00:05:29.242 sys 0m3.235s 00:05:29.242 07:30:54 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.242 07:30:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.242 07:30:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:29.242 07:30:54 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:29.242 07:30:54 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.242 07:30:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.242 07:30:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.242 ************************************ 00:05:29.242 START TEST cpu_locks 00:05:29.242 ************************************ 00:05:29.242 07:30:54 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:29.501 * Looking for test storage... 
00:05:29.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:29.501 07:30:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:29.501 07:30:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:29.501 07:30:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:29.501 07:30:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:29.501 07:30:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.501 07:30:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.501 07:30:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.501 ************************************ 00:05:29.501 START TEST default_locks 00:05:29.501 ************************************ 00:05:29.501 07:30:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:29.501 07:30:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60761 00:05:29.501 07:30:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60761 00:05:29.501 07:30:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.501 07:30:54 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60761 ']' 00:05:29.501 07:30:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.501 07:30:54 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.501 07:30:54 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.501 07:30:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.501 07:30:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.501 [2024-07-26 07:30:54.999401] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:05:29.501 [2024-07-26 07:30:54.999548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60761 ] 00:05:29.760 [2024-07-26 07:30:55.137365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.760 [2024-07-26 07:30:55.257564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.760 [2024-07-26 07:30:55.334452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:30.706 07:30:56 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.706 07:30:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:30.706 07:30:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60761 00:05:30.706 07:30:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60761 00:05:30.706 07:30:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.977 07:30:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60761 00:05:30.977 07:30:56 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60761 ']' 00:05:30.977 07:30:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60761 00:05:30.977 07:30:56 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:30.977 07:30:56 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:30.977 07:30:56 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60761 00:05:30.977 07:30:56 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:30.977 07:30:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:30.977 killing process with pid 60761 00:05:30.977 07:30:56 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60761' 00:05:30.977 07:30:56 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60761 00:05:30.977 07:30:56 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60761 00:05:31.544 07:30:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60761 00:05:31.544 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:31.544 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60761 00:05:31.544 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:31.544 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.544 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60761 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60761 ']' 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.545 07:30:57 
event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.545 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60761) - No such process 00:05:31.545 ERROR: process (pid: 60761) is no longer running 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:31.545 00:05:31.545 real 0m2.094s 00:05:31.545 user 0m2.154s 00:05:31.545 sys 0m0.682s 00:05:31.545 ************************************ 00:05:31.545 END TEST default_locks 00:05:31.545 ************************************ 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.545 07:30:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.545 07:30:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:31.545 07:30:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.545 07:30:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.545 07:30:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.545 ************************************ 00:05:31.545 START TEST default_locks_via_rpc 00:05:31.545 ************************************ 00:05:31.545 07:30:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:31.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
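The default_locks run above is built from two helpers that every cpu_locks test reuses: locks_exist, which asks lslocks whether the target pid still holds an spdk_cpu_lock file lock, and killprocess, which looks the pid up with ps before killing and reaping it. A rough sketch of both, with the command names taken from the trace; the real killprocess also special-cases sudo-wrapped targets, which is dropped here:

    # Return success while the SPDK target with this pid holds a CPU-core lock file.
    locks_exist_sketch() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # Terminate an SPDK target and wait for it to exit.
    killprocess_sketch() {
        local pid=$1 process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # expected to be reactor_0 for a -m 0x1 target
        echo "killing process with pid $pid ($process_name)"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                   # wait returns non-zero after SIGTERM; that is expected
    }

The 'No such process' and 'ERROR: process (pid: 60761) is no longer running' lines above are the expected outcome of the follow-up NOT waitforlisten call: once the target is gone, waiting on its pid must fail rather than hang. The same pair of helpers reappears in default_locks_via_rpc and the coremask tests that follow.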
00:05:31.545 07:30:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60813 00:05:31.545 07:30:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60813 00:05:31.545 07:30:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60813 ']' 00:05:31.545 07:30:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.545 07:30:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.545 07:30:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.545 07:30:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.545 07:30:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.545 07:30:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.804 [2024-07-26 07:30:57.148262] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:05:31.804 [2024-07-26 07:30:57.148368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60813 ] 00:05:31.804 [2024-07-26 07:30:57.285715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.804 [2024-07-26 07:30:57.394437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.062 [2024-07-26 07:30:57.473303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:32.629 07:30:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.629 07:30:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:32.629 07:30:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:32.629 07:30:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.629 07:30:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.629 07:30:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.629 07:30:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:32.629 07:30:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:32.629 07:30:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:32.629 07:30:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:32.629 07:30:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:32.629 07:30:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.629 07:30:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.629 07:30:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.629 07:30:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # 
locks_exist 60813 00:05:32.629 07:30:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60813 00:05:32.629 07:30:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.196 07:30:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60813 00:05:33.196 07:30:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 60813 ']' 00:05:33.196 07:30:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 60813 00:05:33.196 07:30:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:33.196 07:30:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.196 07:30:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60813 00:05:33.196 killing process with pid 60813 00:05:33.196 07:30:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.196 07:30:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.196 07:30:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60813' 00:05:33.196 07:30:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 60813 00:05:33.196 07:30:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 60813 00:05:33.764 00:05:33.764 real 0m2.093s 00:05:33.764 user 0m2.136s 00:05:33.764 sys 0m0.689s 00:05:33.764 07:30:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.764 ************************************ 00:05:33.764 END TEST default_locks_via_rpc 00:05:33.764 ************************************ 00:05:33.764 07:30:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.764 07:30:59 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:33.764 07:30:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.764 07:30:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.764 07:30:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.764 ************************************ 00:05:33.764 START TEST non_locking_app_on_locked_coremask 00:05:33.764 ************************************ 00:05:33.764 07:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:33.764 07:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60864 00:05:33.764 07:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60864 /var/tmp/spdk.sock 00:05:33.764 07:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.764 07:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60864 ']' 00:05:33.764 07:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.764 07:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:05:33.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.764 07:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.764 07:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.764 07:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.764 [2024-07-26 07:30:59.295799] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:05:33.764 [2024-07-26 07:30:59.295899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60864 ] 00:05:34.023 [2024-07-26 07:30:59.435404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.023 [2024-07-26 07:30:59.557253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.282 [2024-07-26 07:30:59.637232] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:34.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.848 07:31:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.848 07:31:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:34.848 07:31:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60880 00:05:34.848 07:31:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60880 /var/tmp/spdk2.sock 00:05:34.848 07:31:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:34.848 07:31:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60880 ']' 00:05:34.848 07:31:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.848 07:31:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.848 07:31:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.848 07:31:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.848 07:31:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.848 [2024-07-26 07:31:00.291900] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:05:34.848 [2024-07-26 07:31:00.292182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60880 ] 00:05:34.848 [2024-07-26 07:31:00.429481] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:34.849 [2024-07-26 07:31:00.433523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.107 [2024-07-26 07:31:00.654070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.365 [2024-07-26 07:31:00.812863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:35.931 07:31:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.931 07:31:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:35.931 07:31:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60864 00:05:35.931 07:31:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60864 00:05:35.931 07:31:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.866 07:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60864 00:05:36.866 07:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60864 ']' 00:05:36.866 07:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60864 00:05:36.866 07:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:36.866 07:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:36.866 07:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60864 00:05:36.866 07:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:36.866 07:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:36.866 killing process with pid 60864 00:05:36.866 07:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60864' 00:05:36.866 07:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60864 00:05:36.866 07:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60864 00:05:37.801 07:31:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60880 00:05:37.801 07:31:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60880 ']' 00:05:37.801 07:31:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60880 00:05:37.801 07:31:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:37.801 07:31:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.801 07:31:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60880 00:05:37.801 killing process with pid 60880 00:05:37.801 07:31:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.801 07:31:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.801 07:31:03 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60880' 00:05:37.801 07:31:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60880 00:05:37.801 07:31:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60880 00:05:38.368 00:05:38.368 real 0m4.644s 00:05:38.368 user 0m4.833s 00:05:38.368 sys 0m1.282s 00:05:38.368 07:31:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.368 07:31:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.368 ************************************ 00:05:38.368 END TEST non_locking_app_on_locked_coremask 00:05:38.368 ************************************ 00:05:38.368 07:31:03 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:38.368 07:31:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.368 07:31:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.368 07:31:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.368 ************************************ 00:05:38.368 START TEST locking_app_on_unlocked_coremask 00:05:38.368 ************************************ 00:05:38.368 07:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:38.368 07:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60958 00:05:38.368 07:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60958 /var/tmp/spdk.sock 00:05:38.368 07:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60958 ']' 00:05:38.368 07:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:38.368 07:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.368 07:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.368 07:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.368 07:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.368 07:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.627 [2024-07-26 07:31:03.997190] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:05:38.627 [2024-07-26 07:31:03.997289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60958 ] 00:05:38.627 [2024-07-26 07:31:04.131266] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:38.627 [2024-07-26 07:31:04.131303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.886 [2024-07-26 07:31:04.264545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.886 [2024-07-26 07:31:04.340923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:39.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.450 07:31:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.450 07:31:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:39.450 07:31:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60974 00:05:39.450 07:31:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60974 /var/tmp/spdk2.sock 00:05:39.450 07:31:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:39.450 07:31:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60974 ']' 00:05:39.450 07:31:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.450 07:31:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.450 07:31:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.450 07:31:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.450 07:31:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.450 [2024-07-26 07:31:05.007682] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:05:39.450 [2024-07-26 07:31:05.007991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60974 ] 00:05:39.708 [2024-07-26 07:31:05.154591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.966 [2024-07-26 07:31:05.357488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.966 [2024-07-26 07:31:05.519329] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:40.533 07:31:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.533 07:31:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:40.533 07:31:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60974 00:05:40.533 07:31:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60974 00:05:40.533 07:31:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.501 07:31:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60958 00:05:41.501 07:31:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60958 ']' 00:05:41.501 07:31:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60958 00:05:41.501 07:31:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:41.501 07:31:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.501 07:31:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60958 00:05:41.501 07:31:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.501 killing process with pid 60958 00:05:41.501 07:31:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.501 07:31:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60958' 00:05:41.501 07:31:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60958 00:05:41.501 07:31:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60958 00:05:42.442 07:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60974 00:05:42.442 07:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60974 ']' 00:05:42.442 07:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60974 00:05:42.442 07:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:42.442 07:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.442 07:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60974 00:05:42.442 killing process with pid 60974 00:05:42.442 07:31:08 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.442 07:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.442 07:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60974' 00:05:42.442 07:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60974 00:05:42.442 07:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60974 00:05:43.008 ************************************ 00:05:43.008 END TEST locking_app_on_unlocked_coremask 00:05:43.008 ************************************ 00:05:43.008 00:05:43.008 real 0m4.664s 00:05:43.008 user 0m4.830s 00:05:43.008 sys 0m1.376s 00:05:43.008 07:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.008 07:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.266 07:31:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:43.266 07:31:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.266 07:31:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.266 07:31:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.266 ************************************ 00:05:43.266 START TEST locking_app_on_locked_coremask 00:05:43.266 ************************************ 00:05:43.266 07:31:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:43.266 07:31:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61047 00:05:43.266 07:31:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.266 07:31:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61047 /var/tmp/spdk.sock 00:05:43.266 07:31:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61047 ']' 00:05:43.266 07:31:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.266 07:31:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.266 07:31:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.266 07:31:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.266 07:31:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.266 [2024-07-26 07:31:08.702110] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:05:43.266 [2024-07-26 07:31:08.702205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61047 ] 00:05:43.266 [2024-07-26 07:31:08.832858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.524 [2024-07-26 07:31:08.945537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.524 [2024-07-26 07:31:09.024039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:44.458 07:31:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.458 07:31:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:44.458 07:31:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:44.458 07:31:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61063 00:05:44.458 07:31:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61063 /var/tmp/spdk2.sock 00:05:44.458 07:31:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:44.458 07:31:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61063 /var/tmp/spdk2.sock 00:05:44.458 07:31:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:44.458 07:31:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.458 07:31:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:44.458 07:31:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.458 07:31:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61063 /var/tmp/spdk2.sock 00:05:44.458 07:31:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61063 ']' 00:05:44.458 07:31:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.458 07:31:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.458 07:31:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.458 07:31:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.458 07:31:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.459 [2024-07-26 07:31:09.750708] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:05:44.459 [2024-07-26 07:31:09.750789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61063 ] 00:05:44.459 [2024-07-26 07:31:09.890327] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61047 has claimed it. 00:05:44.459 [2024-07-26 07:31:09.890419] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:45.025 ERROR: process (pid: 61063) is no longer running 00:05:45.025 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (61063) - No such process 00:05:45.025 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.025 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:45.025 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:45.025 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.025 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:45.025 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:45.025 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61047 00:05:45.025 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61047 00:05:45.025 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.283 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61047 00:05:45.283 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61047 ']' 00:05:45.283 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 61047 00:05:45.283 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:45.284 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.284 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61047 00:05:45.284 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.284 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.284 killing process with pid 61047 00:05:45.284 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61047' 00:05:45.284 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 61047 00:05:45.284 07:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 61047 00:05:45.852 00:05:45.852 real 0m2.767s 00:05:45.852 user 0m3.045s 00:05:45.852 sys 0m0.725s 00:05:45.852 ************************************ 00:05:45.852 END TEST locking_app_on_locked_coremask 00:05:45.852 ************************************ 00:05:45.852 07:31:11 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.852 07:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.111 07:31:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:46.111 07:31:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.111 07:31:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.111 07:31:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.111 ************************************ 00:05:46.111 START TEST locking_overlapped_coremask 00:05:46.111 ************************************ 00:05:46.111 07:31:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:46.111 07:31:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61108 00:05:46.111 07:31:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61108 /var/tmp/spdk.sock 00:05:46.111 07:31:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:46.111 07:31:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 61108 ']' 00:05:46.111 07:31:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.111 07:31:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.111 07:31:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.111 07:31:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.111 07:31:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.111 [2024-07-26 07:31:11.535126] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
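Each killprocess call in these tests follows the same shape: probe the pid with kill -0 (signal 0 only tests for existence), confirm with ps that the command name is an SPDK reactor rather than sudo, then kill it and wait for it to exit. A rough standalone equivalent, with the pid as an illustrative value from this run:

    pid=61047                                   # example pid from the run above
    if kill -0 "$pid" 2>/dev/null; then         # signal 0: does the process still exist?
        ps --no-headers -o comm= "$pid"         # prints e.g. reactor_0 for an spdk_tgt
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                 # valid when $pid is a child of this shell, as in the autotest scripts
    fi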
00:05:46.111 [2024-07-26 07:31:11.535235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61108 ] 00:05:46.111 [2024-07-26 07:31:11.672540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.369 [2024-07-26 07:31:11.786362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.369 [2024-07-26 07:31:11.786529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.369 [2024-07-26 07:31:11.786534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.369 [2024-07-26 07:31:11.864830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:46.935 07:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.935 07:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:46.935 07:31:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:46.935 07:31:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61126 00:05:46.936 07:31:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61126 /var/tmp/spdk2.sock 00:05:46.936 07:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:46.936 07:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61126 /var/tmp/spdk2.sock 00:05:46.936 07:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:46.936 07:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.936 07:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:46.936 07:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.936 07:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61126 /var/tmp/spdk2.sock 00:05:46.936 07:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 61126 ']' 00:05:46.936 07:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.936 07:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.936 07:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.936 07:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.936 07:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.194 [2024-07-26 07:31:12.553601] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
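The first target above runs with -m 0x7 (cores 0-2) while the second instance is started with -m 0x1c (cores 2-4), so the two core masks intersect on core 2 and the claim failure reported below is the expected outcome. The overlap is plain bit arithmetic and can be checked directly in the shell:

    a=0x7; b=0x1c
    overlap=$(( a & b ))                        # 0x4
    printf 'overlap mask: 0x%x -> core(s):' "$overlap"
    for core in {0..31}; do
        (( overlap & (1 << core) )) && printf ' %d' "$core"
    done
    printf '\n'                                 # overlap mask: 0x4 -> core(s): 2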
00:05:47.194 [2024-07-26 07:31:12.553690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61126 ] 00:05:47.194 [2024-07-26 07:31:12.701038] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61108 has claimed it. 00:05:47.194 [2024-07-26 07:31:12.701113] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:47.760 ERROR: process (pid: 61126) is no longer running 00:05:47.760 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (61126) - No such process 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61108 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 61108 ']' 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 61108 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61108 00:05:47.760 killing process with pid 61108 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.760 07:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61108' 00:05:47.761 07:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 61108 00:05:47.761 07:31:13 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 61108 00:05:48.325 00:05:48.325 real 0m2.430s 00:05:48.325 user 0m6.577s 00:05:48.325 sys 0m0.525s 00:05:48.325 07:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.325 ************************************ 00:05:48.325 07:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.325 END TEST locking_overlapped_coremask 00:05:48.325 ************************************ 00:05:48.582 07:31:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:48.582 07:31:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.582 07:31:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.582 07:31:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.582 ************************************ 00:05:48.582 START TEST locking_overlapped_coremask_via_rpc 00:05:48.582 ************************************ 00:05:48.582 07:31:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:48.582 07:31:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61176 00:05:48.582 07:31:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61176 /var/tmp/spdk.sock 00:05:48.582 07:31:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:48.582 07:31:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61176 ']' 00:05:48.582 07:31:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.582 07:31:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.582 07:31:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.582 07:31:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.582 07:31:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.582 [2024-07-26 07:31:14.022965] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:05:48.582 [2024-07-26 07:31:14.023090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61176 ] 00:05:48.582 [2024-07-26 07:31:14.161951] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
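The check_remaining_locks step above (cpu_locks.sh@36-38) relies on each claimed core being backed by a lock file named /var/tmp/spdk_cpu_lock_NNN: it globs the existing files and compares them against the brace expansion expected for cores 0-2, which is what a target started with -m 0x7 should have claimed. A condensed sketch of that comparison:

    # Expect exactly the lock files for cores 0-2, as claimed by a target started with -m 0x7.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
        echo "only the expected core lock files remain"
    else
        echo "unexpected lock files: ${locks[*]}"
    fi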
00:05:48.582 [2024-07-26 07:31:14.162028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:48.840 [2024-07-26 07:31:14.332270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.840 [2024-07-26 07:31:14.332422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.840 [2024-07-26 07:31:14.332430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.840 [2024-07-26 07:31:14.414240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:49.774 07:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.774 07:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:49.774 07:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61195 00:05:49.774 07:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61195 /var/tmp/spdk2.sock 00:05:49.774 07:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:49.774 07:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61195 ']' 00:05:49.774 07:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.774 07:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.774 07:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.774 07:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.774 07:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.774 [2024-07-26 07:31:15.099221] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:05:49.774 [2024-07-26 07:31:15.099333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61195 ] 00:05:49.774 [2024-07-26 07:31:15.242388] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:49.774 [2024-07-26 07:31:15.242442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.033 [2024-07-26 07:31:15.556714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.033 [2024-07-26 07:31:15.560597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:50.033 [2024-07-26 07:31:15.560597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.292 [2024-07-26 07:31:15.714346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.859 [2024-07-26 07:31:16.226732] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61176 has claimed it. 
00:05:50.859 request: 00:05:50.859 { 00:05:50.859 "method": "framework_enable_cpumask_locks", 00:05:50.859 "req_id": 1 00:05:50.859 } 00:05:50.859 Got JSON-RPC error response 00:05:50.859 response: 00:05:50.859 { 00:05:50.859 "code": -32603, 00:05:50.859 "message": "Failed to claim CPU core: 2" 00:05:50.859 } 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61176 /var/tmp/spdk.sock 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61176 ']' 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.859 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.118 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.118 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:51.118 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61195 /var/tmp/spdk2.sock 00:05:51.118 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61195 ']' 00:05:51.118 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.118 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.118 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
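Both targets in this test were started with --disable-cpumask-locks, so neither takes the per-core lock files at boot; locking is then switched on at runtime over JSON-RPC. The framework_enable_cpumask_locks call succeeds on the first instance, and the same call sent to the second instance's socket fails with the -32603 "Failed to claim CPU core: 2" response shown above, because core 2 is already locked by pid 61176. Outside the rpc_cmd wrapper, roughly the same exchange can be driven with scripts/rpc.py, using the socket paths from this run:

    # Enable CPU core lock files on the primary target (default socket /var/tmp/spdk.sock).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
    # The same request against the overlapping secondary target is expected to fail
    # with "Failed to claim CPU core: 2" while the primary holds that lock.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks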
00:05:51.118 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.118 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.377 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.377 ************************************ 00:05:51.377 END TEST locking_overlapped_coremask_via_rpc 00:05:51.377 ************************************ 00:05:51.377 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:51.377 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:51.377 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:51.377 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:51.377 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:51.377 00:05:51.377 real 0m2.865s 00:05:51.377 user 0m1.413s 00:05:51.377 sys 0m0.222s 00:05:51.377 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.377 07:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.377 07:31:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:51.377 07:31:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61176 ]] 00:05:51.377 07:31:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61176 00:05:51.377 07:31:16 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61176 ']' 00:05:51.377 07:31:16 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61176 00:05:51.377 07:31:16 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:51.377 07:31:16 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.377 07:31:16 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61176 00:05:51.377 killing process with pid 61176 00:05:51.377 07:31:16 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.377 07:31:16 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.377 07:31:16 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61176' 00:05:51.377 07:31:16 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61176 00:05:51.377 07:31:16 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61176 00:05:51.944 07:31:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61195 ]] 00:05:51.944 07:31:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61195 00:05:51.944 07:31:17 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61195 ']' 00:05:51.944 07:31:17 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61195 00:05:51.944 07:31:17 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:51.944 07:31:17 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.944 
07:31:17 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61195 00:05:51.944 killing process with pid 61195 00:05:51.944 07:31:17 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:51.944 07:31:17 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:51.944 07:31:17 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61195' 00:05:51.944 07:31:17 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61195 00:05:51.944 07:31:17 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61195 00:05:52.880 07:31:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:52.880 07:31:18 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:52.880 07:31:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61176 ]] 00:05:52.880 07:31:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61176 00:05:52.880 07:31:18 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61176 ']' 00:05:52.880 07:31:18 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61176 00:05:52.880 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61176) - No such process 00:05:52.880 Process with pid 61176 is not found 00:05:52.880 07:31:18 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61176 is not found' 00:05:52.880 07:31:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61195 ]] 00:05:52.880 07:31:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61195 00:05:52.880 07:31:18 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61195 ']' 00:05:52.880 07:31:18 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61195 00:05:52.880 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61195) - No such process 00:05:52.880 Process with pid 61195 is not found 00:05:52.880 07:31:18 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61195 is not found' 00:05:52.880 07:31:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:52.880 ************************************ 00:05:52.880 END TEST cpu_locks 00:05:52.880 ************************************ 00:05:52.880 00:05:52.880 real 0m23.310s 00:05:52.880 user 0m39.494s 00:05:52.880 sys 0m6.572s 00:05:52.880 07:31:18 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.880 07:31:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.880 00:05:52.880 real 0m51.641s 00:05:52.880 user 1m36.987s 00:05:52.880 sys 0m10.693s 00:05:52.880 07:31:18 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.880 07:31:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.880 ************************************ 00:05:52.880 END TEST event 00:05:52.880 ************************************ 00:05:52.880 07:31:18 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:52.880 07:31:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.880 07:31:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.880 07:31:18 -- common/autotest_common.sh@10 -- # set +x 00:05:52.880 ************************************ 00:05:52.880 START TEST thread 00:05:52.880 ************************************ 00:05:52.880 07:31:18 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:52.880 * Looking for test storage... 
00:05:52.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:52.880 07:31:18 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:52.880 07:31:18 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:52.880 07:31:18 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.880 07:31:18 thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.880 ************************************ 00:05:52.880 START TEST thread_poller_perf 00:05:52.880 ************************************ 00:05:52.880 07:31:18 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:52.880 [2024-07-26 07:31:18.350659] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:05:52.880 [2024-07-26 07:31:18.350764] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61323 ] 00:05:53.138 [2024-07-26 07:31:18.492214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.138 [2024-07-26 07:31:18.638751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.138 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:54.511 ====================================== 00:05:54.511 busy:2212032274 (cyc) 00:05:54.511 total_run_count: 315000 00:05:54.511 tsc_hz: 2200000000 (cyc) 00:05:54.511 ====================================== 00:05:54.511 poller_cost: 7022 (cyc), 3191 (nsec) 00:05:54.511 00:05:54.511 real 0m1.437s 00:05:54.511 user 0m1.246s 00:05:54.511 sys 0m0.078s 00:05:54.511 07:31:19 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.511 07:31:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.511 ************************************ 00:05:54.511 END TEST thread_poller_perf 00:05:54.511 ************************************ 00:05:54.511 07:31:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:54.511 07:31:19 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:54.511 07:31:19 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.511 07:31:19 thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.511 ************************************ 00:05:54.511 START TEST thread_poller_perf 00:05:54.511 ************************************ 00:05:54.511 07:31:19 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:54.511 [2024-07-26 07:31:19.839945] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:05:54.511 [2024-07-26 07:31:19.840057] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61359 ] 00:05:54.511 [2024-07-26 07:31:19.978676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.511 Running 1000 pollers for 1 seconds with 0 microseconds period. 
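The poller_perf summary above is consistent with simple arithmetic over the values it prints: the per-invocation cost in cycles is the busy cycle count divided by total_run_count, and the nanosecond figure converts that through tsc_hz. For the 1-microsecond-period run, 2212032274 / 315000 ~ 7022 cycles and 7022 * 10^9 / 2200000000 ~ 3191 ns, exactly the reported poller_cost; the 0-microsecond run that follows works out the same way (517 cycles, 235 ns). The same conversion in shell, using this run's numbers:

    busy=2212032274; runs=315000; tsc_hz=2200000000
    cyc=$(( busy / runs ))                      # 7022 cycles per poller invocation
    nsec=$(( cyc * 1000000000 / tsc_hz ))       # 3191 ns at the 2.2 GHz TSC reported above
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"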
00:05:54.511 [2024-07-26 07:31:20.105323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.891 ====================================== 00:05:55.891 busy:2202585578 (cyc) 00:05:55.891 total_run_count: 4257000 00:05:55.891 tsc_hz: 2200000000 (cyc) 00:05:55.891 ====================================== 00:05:55.891 poller_cost: 517 (cyc), 235 (nsec) 00:05:55.891 00:05:55.891 real 0m1.411s 00:05:55.891 user 0m1.233s 00:05:55.891 sys 0m0.070s 00:05:55.891 07:31:21 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.891 07:31:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.891 ************************************ 00:05:55.891 END TEST thread_poller_perf 00:05:55.891 ************************************ 00:05:55.891 07:31:21 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:55.891 00:05:55.891 real 0m3.045s 00:05:55.891 user 0m2.540s 00:05:55.891 sys 0m0.270s 00:05:55.891 07:31:21 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.891 07:31:21 thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.891 ************************************ 00:05:55.891 END TEST thread 00:05:55.891 ************************************ 00:05:55.891 07:31:21 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:05:55.891 07:31:21 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:55.891 07:31:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.891 07:31:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.891 07:31:21 -- common/autotest_common.sh@10 -- # set +x 00:05:55.891 ************************************ 00:05:55.891 START TEST app_cmdline 00:05:55.891 ************************************ 00:05:55.891 07:31:21 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:55.891 * Looking for test storage... 00:05:55.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:55.891 07:31:21 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:55.891 07:31:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61433 00:05:55.891 07:31:21 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:55.891 07:31:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61433 00:05:55.891 07:31:21 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61433 ']' 00:05:55.891 07:31:21 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.891 07:31:21 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.891 07:31:21 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.891 07:31:21 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.891 07:31:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:55.891 [2024-07-26 07:31:21.477200] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:05:55.891 [2024-07-26 07:31:21.477296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61433 ] 00:05:56.148 [2024-07-26 07:31:21.615863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.148 [2024-07-26 07:31:21.733641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.406 [2024-07-26 07:31:21.812850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:56.973 07:31:22 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.973 07:31:22 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:56.973 07:31:22 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:57.231 { 00:05:57.231 "version": "SPDK v24.09-pre git sha1 5c22a76d6", 00:05:57.231 "fields": { 00:05:57.231 "major": 24, 00:05:57.231 "minor": 9, 00:05:57.231 "patch": 0, 00:05:57.231 "suffix": "-pre", 00:05:57.231 "commit": "5c22a76d6" 00:05:57.231 } 00:05:57.231 } 00:05:57.231 07:31:22 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:57.231 07:31:22 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:57.231 07:31:22 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:57.231 07:31:22 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:57.231 07:31:22 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:57.231 07:31:22 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.231 07:31:22 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:57.231 07:31:22 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:57.231 07:31:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:57.231 07:31:22 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.231 07:31:22 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:57.231 07:31:22 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:57.231 07:31:22 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.231 07:31:22 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:57.231 07:31:22 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.231 07:31:22 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:57.231 07:31:22 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.231 07:31:22 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:57.231 07:31:22 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.231 07:31:22 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:57.231 07:31:22 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.231 07:31:22 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:57.231 07:31:22 app_cmdline -- common/autotest_common.sh@644 -- # [[ 
-x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:57.231 07:31:22 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.490 request: 00:05:57.490 { 00:05:57.490 "method": "env_dpdk_get_mem_stats", 00:05:57.490 "req_id": 1 00:05:57.490 } 00:05:57.490 Got JSON-RPC error response 00:05:57.490 response: 00:05:57.490 { 00:05:57.490 "code": -32601, 00:05:57.490 "message": "Method not found" 00:05:57.490 } 00:05:57.490 07:31:22 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:57.490 07:31:22 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.490 07:31:22 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.490 07:31:22 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.490 07:31:22 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61433 00:05:57.490 07:31:22 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61433 ']' 00:05:57.490 07:31:22 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61433 00:05:57.490 07:31:22 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:57.490 07:31:22 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.490 07:31:22 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61433 00:05:57.490 07:31:23 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.490 killing process with pid 61433 00:05:57.490 07:31:23 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.490 07:31:23 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61433' 00:05:57.490 07:31:23 app_cmdline -- common/autotest_common.sh@969 -- # kill 61433 00:05:57.490 07:31:23 app_cmdline -- common/autotest_common.sh@974 -- # wait 61433 00:05:58.055 ************************************ 00:05:58.055 END TEST app_cmdline 00:05:58.055 ************************************ 00:05:58.055 00:05:58.055 real 0m2.237s 00:05:58.055 user 0m2.630s 00:05:58.055 sys 0m0.579s 00:05:58.055 07:31:23 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.055 07:31:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:58.055 07:31:23 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:58.055 07:31:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.055 07:31:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.055 07:31:23 -- common/autotest_common.sh@10 -- # set +x 00:05:58.055 ************************************ 00:05:58.055 START TEST version 00:05:58.055 ************************************ 00:05:58.055 07:31:23 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:58.313 * Looking for test storage... 
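The app_cmdline target above is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, which restricts the RPC surface on /var/tmp/spdk.sock to exactly those two methods. That is why rpc_get_methods returns two entries and why the env_dpdk_get_mem_stats call comes back with the -32601 "Method not found" response shown above instead of memory statistics. The same behaviour can be reproduced directly with rpc.py against a target started that way:

    # Permitted by the --rpcs-allowed filter: prints the version JSON seen in the log.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
    # Not on the allowed list: expected to fail with JSON-RPC error -32601 (Method not found).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats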
00:05:58.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:58.313 07:31:23 version -- app/version.sh@17 -- # get_header_version major 00:05:58.313 07:31:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:58.313 07:31:23 version -- app/version.sh@14 -- # cut -f2 00:05:58.313 07:31:23 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.313 07:31:23 version -- app/version.sh@17 -- # major=24 00:05:58.313 07:31:23 version -- app/version.sh@18 -- # get_header_version minor 00:05:58.313 07:31:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:58.313 07:31:23 version -- app/version.sh@14 -- # cut -f2 00:05:58.313 07:31:23 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.313 07:31:23 version -- app/version.sh@18 -- # minor=9 00:05:58.313 07:31:23 version -- app/version.sh@19 -- # get_header_version patch 00:05:58.313 07:31:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:58.313 07:31:23 version -- app/version.sh@14 -- # cut -f2 00:05:58.313 07:31:23 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.313 07:31:23 version -- app/version.sh@19 -- # patch=0 00:05:58.313 07:31:23 version -- app/version.sh@20 -- # get_header_version suffix 00:05:58.313 07:31:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:58.313 07:31:23 version -- app/version.sh@14 -- # cut -f2 00:05:58.313 07:31:23 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.313 07:31:23 version -- app/version.sh@20 -- # suffix=-pre 00:05:58.313 07:31:23 version -- app/version.sh@22 -- # version=24.9 00:05:58.313 07:31:23 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:58.313 07:31:23 version -- app/version.sh@28 -- # version=24.9rc0 00:05:58.313 07:31:23 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:58.313 07:31:23 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:58.313 07:31:23 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:58.313 07:31:23 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:58.313 00:05:58.313 real 0m0.148s 00:05:58.313 user 0m0.074s 00:05:58.313 sys 0m0.102s 00:05:58.313 07:31:23 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.313 07:31:23 version -- common/autotest_common.sh@10 -- # set +x 00:05:58.313 ************************************ 00:05:58.313 END TEST version 00:05:58.313 ************************************ 00:05:58.313 07:31:23 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:05:58.313 07:31:23 -- spdk/autotest.sh@202 -- # uname -s 00:05:58.313 07:31:23 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:05:58.313 07:31:23 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:05:58.313 07:31:23 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:05:58.313 07:31:23 -- spdk/autotest.sh@209 -- # [[ 0 -eq 0 ]] 00:05:58.313 07:31:23 -- spdk/autotest.sh@210 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:58.313 07:31:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.313 07:31:23 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.313 07:31:23 -- common/autotest_common.sh@10 -- # set +x 00:05:58.313 ************************************ 00:05:58.313 START TEST spdk_dd 00:05:58.313 ************************************ 00:05:58.313 07:31:23 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:58.313 * Looking for test storage... 00:05:58.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:58.313 07:31:23 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:58.313 07:31:23 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.313 07:31:23 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.313 07:31:23 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.313 07:31:23 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.313 07:31:23 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.313 07:31:23 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.313 07:31:23 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:58.313 07:31:23 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.313 07:31:23 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:58.881 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:58.881 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:58.881 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:58.881 07:31:24 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:58.881 07:31:24 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:05:58.881 07:31:24 spdk_dd -- 
scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@230 -- # local class 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@232 -- # local progif 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@233 -- # class=01 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@15 -- # local i 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@24 -- # return 0 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@15 -- # local i 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@24 -- # return 0 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:05:58.881 07:31:24 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 
0000:00:10.0 0000:00:11.0 00:05:58.881 07:31:24 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* 
]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:58.881 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 
== liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == 
liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:58.882 * spdk_dd linked to liburing 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:58.882 07:31:24 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:58.882 07:31:24 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:58.882 07:31:24 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:58.882 07:31:24 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:58.882 07:31:24 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:58.882 07:31:24 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:58.882 07:31:24 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:58.882 07:31:24 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:58.882 07:31:24 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:58.882 07:31:24 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:58.882 07:31:24 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:58.882 07:31:24 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:58.882 07:31:24 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:58.882 07:31:24 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:58.882 07:31:24 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:58.882 07:31:24 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:58.883 07:31:24 spdk_dd -- 
common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 
00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:58.883 07:31:24 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:05:58.883 07:31:24 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:58.883 07:31:24 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:58.883 07:31:24 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:58.883 07:31:24 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:58.883 07:31:24 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:58.883 07:31:24 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:58.883 07:31:24 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:58.883 07:31:24 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.883 07:31:24 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:58.883 ************************************ 00:05:58.883 START TEST spdk_dd_basic_rw 00:05:58.883 ************************************ 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:58.883 * Looking for test storage... 00:05:58.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:58.883 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:59.143 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:59.143 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:59.144 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:59.144 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted 
Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not 
Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:59.144 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete 
Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): 
Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b 
Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:59.145 ************************************ 00:05:59.145 START TEST dd_bs_lt_native_bs 00:05:59.145 ************************************ 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:59.145 07:31:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:59.145 { 00:05:59.145 "subsystems": [ 00:05:59.145 { 00:05:59.145 "subsystem": "bdev", 00:05:59.145 "config": [ 00:05:59.145 { 00:05:59.145 "params": { 00:05:59.145 "trtype": "pcie", 00:05:59.145 "traddr": "0000:00:10.0", 00:05:59.145 "name": "Nvme0" 00:05:59.145 }, 00:05:59.145 "method": 
"bdev_nvme_attach_controller" 00:05:59.145 }, 00:05:59.145 { 00:05:59.145 "method": "bdev_wait_for_examine" 00:05:59.145 } 00:05:59.145 ] 00:05:59.145 } 00:05:59.145 ] 00:05:59.145 } 00:05:59.145 [2024-07-26 07:31:24.739094] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:05:59.146 [2024-07-26 07:31:24.739202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61755 ] 00:05:59.404 [2024-07-26 07:31:24.878381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.661 [2024-07-26 07:31:25.009691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.661 [2024-07-26 07:31:25.090959] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:59.661 [2024-07-26 07:31:25.210329] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:59.661 [2024-07-26 07:31:25.210408] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:59.919 [2024-07-26 07:31:25.387426] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:59.919 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:05:59.919 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.919 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:05:59.919 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:05:59.919 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:05:59.919 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.919 00:05:59.919 real 0m0.831s 00:05:59.919 user 0m0.574s 00:05:59.919 sys 0m0.212s 00:05:59.919 ************************************ 00:05:59.919 END TEST dd_bs_lt_native_bs 00:05:59.919 ************************************ 00:05:59.919 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.919 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.178 ************************************ 00:06:00.178 START TEST dd_rw 00:06:00.178 ************************************ 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:00.178 07:31:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.745 07:31:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:00.745 07:31:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:00.745 07:31:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:00.745 07:31:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.745 { 00:06:00.745 "subsystems": [ 00:06:00.745 { 00:06:00.745 "subsystem": "bdev", 00:06:00.745 "config": [ 00:06:00.745 { 00:06:00.745 "params": { 00:06:00.745 "trtype": "pcie", 00:06:00.745 "traddr": "0000:00:10.0", 00:06:00.745 "name": "Nvme0" 00:06:00.745 }, 00:06:00.745 "method": "bdev_nvme_attach_controller" 00:06:00.745 }, 00:06:00.745 { 00:06:00.745 "method": "bdev_wait_for_examine" 00:06:00.745 } 00:06:00.745 ] 00:06:00.745 } 00:06:00.745 ] 00:06:00.745 } 00:06:00.745 [2024-07-26 07:31:26.242116] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
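The dd_rw pass above derives its parameter matrix from the native block size detected earlier in the trace (current LBA format #04, 4096-byte data size). Reconstructed from the recorded trace values, the setup amounts to the following sketch; the 61440-byte transfer size is simply count * native_bs:

  # Parameter matrix as recorded in the trace (a sketch, not the verbatim basic_rw.sh source)
  native_bs=4096
  bss=(); for bs in {0..2}; do bss+=($((native_bs << bs))); done   # 4096 8192 16384
  qds=(1 64)
  count=15
  size=$((count * native_bs))                                      # 15 * 4096 = 61440 bytes per pass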
00:06:00.745 [2024-07-26 07:31:26.242241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61792 ] 00:06:01.004 [2024-07-26 07:31:26.382315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.004 [2024-07-26 07:31:26.498654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.004 [2024-07-26 07:31:26.577894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:01.520  Copying: 60/60 [kB] (average 19 MBps) 00:06:01.520 00:06:01.520 07:31:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:01.520 07:31:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:01.520 07:31:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:01.520 07:31:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:01.520 { 00:06:01.520 "subsystems": [ 00:06:01.520 { 00:06:01.521 "subsystem": "bdev", 00:06:01.521 "config": [ 00:06:01.521 { 00:06:01.521 "params": { 00:06:01.521 "trtype": "pcie", 00:06:01.521 "traddr": "0000:00:10.0", 00:06:01.521 "name": "Nvme0" 00:06:01.521 }, 00:06:01.521 "method": "bdev_nvme_attach_controller" 00:06:01.521 }, 00:06:01.521 { 00:06:01.521 "method": "bdev_wait_for_examine" 00:06:01.521 } 00:06:01.521 ] 00:06:01.521 } 00:06:01.521 ] 00:06:01.521 } 00:06:01.521 [2024-07-26 07:31:27.061633] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
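Each (block size, queue depth) combination follows the same cycle: write the generated test file to the Nvme0n1 bdev, read the same range back into a second file, then compare the two. A minimal sketch of the qd=1 cycle, using the command shapes recorded in the trace; feeding gen_conf output to --json via process substitution is an assumption about the helper wiring, as the log only shows the /dev/fd/62 descriptor:

  # Sketch of the qd=1 write / read-back / verify cycle (paths and flags taken from the trace)
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  "$SPDK_DD" --if="$test_file0" --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)              # write 61440 bytes
  "$SPDK_DD" --ib=Nvme0n1 --of="$test_file1" --bs=4096 --qd=1 --count=15 --json <(gen_conf)   # read them back
  diff -q "$test_file0" "$test_file1"                                                         # verify contents match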
00:06:01.521 [2024-07-26 07:31:27.061733] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61811 ] 00:06:01.779 [2024-07-26 07:31:27.198224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.779 [2024-07-26 07:31:27.292376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.779 [2024-07-26 07:31:27.371455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:02.296  Copying: 60/60 [kB] (average 14 MBps) 00:06:02.296 00:06:02.296 07:31:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:02.296 07:31:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:02.296 07:31:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:02.296 07:31:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:02.296 07:31:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:02.296 07:31:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:02.296 07:31:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:02.296 07:31:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:02.296 07:31:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:02.296 07:31:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:02.296 07:31:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.296 [2024-07-26 07:31:27.836658] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:02.296 [2024-07-26 07:31:27.836755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61826 ] 00:06:02.296 { 00:06:02.296 "subsystems": [ 00:06:02.296 { 00:06:02.296 "subsystem": "bdev", 00:06:02.296 "config": [ 00:06:02.296 { 00:06:02.296 "params": { 00:06:02.296 "trtype": "pcie", 00:06:02.296 "traddr": "0000:00:10.0", 00:06:02.296 "name": "Nvme0" 00:06:02.296 }, 00:06:02.296 "method": "bdev_nvme_attach_controller" 00:06:02.296 }, 00:06:02.296 { 00:06:02.296 "method": "bdev_wait_for_examine" 00:06:02.296 } 00:06:02.296 ] 00:06:02.296 } 00:06:02.296 ] 00:06:02.296 } 00:06:02.555 [2024-07-26 07:31:27.972729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.555 [2024-07-26 07:31:28.091688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.814 [2024-07-26 07:31:28.169844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:03.072  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:03.072 00:06:03.072 07:31:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:03.072 07:31:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:03.072 07:31:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:03.072 07:31:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:03.072 07:31:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:03.072 07:31:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:03.072 07:31:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.637 07:31:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:03.637 07:31:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:03.637 07:31:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:03.637 07:31:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.637 [2024-07-26 07:31:29.212785] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:03.637 [2024-07-26 07:31:29.213670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61851 ] 00:06:03.637 { 00:06:03.637 "subsystems": [ 00:06:03.637 { 00:06:03.637 "subsystem": "bdev", 00:06:03.637 "config": [ 00:06:03.637 { 00:06:03.637 "params": { 00:06:03.637 "trtype": "pcie", 00:06:03.637 "traddr": "0000:00:10.0", 00:06:03.637 "name": "Nvme0" 00:06:03.637 }, 00:06:03.637 "method": "bdev_nvme_attach_controller" 00:06:03.637 }, 00:06:03.637 { 00:06:03.637 "method": "bdev_wait_for_examine" 00:06:03.637 } 00:06:03.637 ] 00:06:03.637 } 00:06:03.637 ] 00:06:03.637 } 00:06:03.896 [2024-07-26 07:31:29.353338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.896 [2024-07-26 07:31:29.467294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.154 [2024-07-26 07:31:29.524431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.413  Copying: 60/60 [kB] (average 58 MBps) 00:06:04.413 00:06:04.413 07:31:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:04.413 07:31:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:04.413 07:31:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:04.413 07:31:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.413 { 00:06:04.413 "subsystems": [ 00:06:04.413 { 00:06:04.413 "subsystem": "bdev", 00:06:04.413 "config": [ 00:06:04.413 { 00:06:04.413 "params": { 00:06:04.413 "trtype": "pcie", 00:06:04.413 "traddr": "0000:00:10.0", 00:06:04.413 "name": "Nvme0" 00:06:04.413 }, 00:06:04.413 "method": "bdev_nvme_attach_controller" 00:06:04.413 }, 00:06:04.413 { 00:06:04.413 "method": "bdev_wait_for_examine" 00:06:04.413 } 00:06:04.413 ] 00:06:04.413 } 00:06:04.413 ] 00:06:04.413 } 00:06:04.413 [2024-07-26 07:31:29.995723] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:04.413 [2024-07-26 07:31:29.995821] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61864 ] 00:06:04.671 [2024-07-26 07:31:30.132015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.671 [2024-07-26 07:31:30.243865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.930 [2024-07-26 07:31:30.323039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.188  Copying: 60/60 [kB] (average 58 MBps) 00:06:05.188 00:06:05.188 07:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:05.188 07:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:05.188 07:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:05.188 07:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:05.188 07:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:05.188 07:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:05.188 07:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:05.188 07:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:05.188 07:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:05.188 07:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:05.188 07:31:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:05.188 [2024-07-26 07:31:30.783003] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:05.188 [2024-07-26 07:31:30.783097] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61884 ] 00:06:05.447 { 00:06:05.447 "subsystems": [ 00:06:05.447 { 00:06:05.447 "subsystem": "bdev", 00:06:05.447 "config": [ 00:06:05.447 { 00:06:05.447 "params": { 00:06:05.447 "trtype": "pcie", 00:06:05.447 "traddr": "0000:00:10.0", 00:06:05.447 "name": "Nvme0" 00:06:05.447 }, 00:06:05.447 "method": "bdev_nvme_attach_controller" 00:06:05.447 }, 00:06:05.447 { 00:06:05.447 "method": "bdev_wait_for_examine" 00:06:05.447 } 00:06:05.447 ] 00:06:05.447 } 00:06:05.447 ] 00:06:05.447 } 00:06:05.447 [2024-07-26 07:31:30.913282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.447 [2024-07-26 07:31:31.016280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.716 [2024-07-26 07:31:31.094308] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.975  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:05.975 00:06:05.975 07:31:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:05.975 07:31:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:05.975 07:31:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:05.975 07:31:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:05.975 07:31:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:05.975 07:31:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:05.975 07:31:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:05.975 07:31:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.563 07:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:06.563 07:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:06.563 07:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:06.563 07:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.563 [2024-07-26 07:31:32.121726] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:06.563 [2024-07-26 07:31:32.121834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61910 ] 00:06:06.563 { 00:06:06.563 "subsystems": [ 00:06:06.563 { 00:06:06.563 "subsystem": "bdev", 00:06:06.563 "config": [ 00:06:06.563 { 00:06:06.563 "params": { 00:06:06.563 "trtype": "pcie", 00:06:06.563 "traddr": "0000:00:10.0", 00:06:06.563 "name": "Nvme0" 00:06:06.563 }, 00:06:06.563 "method": "bdev_nvme_attach_controller" 00:06:06.563 }, 00:06:06.563 { 00:06:06.563 "method": "bdev_wait_for_examine" 00:06:06.563 } 00:06:06.563 ] 00:06:06.563 } 00:06:06.563 ] 00:06:06.563 } 00:06:06.822 [2024-07-26 07:31:32.259541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.822 [2024-07-26 07:31:32.353149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.080 [2024-07-26 07:31:32.431918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.339  Copying: 56/56 [kB] (average 27 MBps) 00:06:07.339 00:06:07.339 07:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:07.339 07:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:07.339 07:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:07.339 07:31:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:07.339 [2024-07-26 07:31:32.899543] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:07.339 [2024-07-26 07:31:32.899645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61922 ] 00:06:07.339 { 00:06:07.339 "subsystems": [ 00:06:07.339 { 00:06:07.339 "subsystem": "bdev", 00:06:07.339 "config": [ 00:06:07.339 { 00:06:07.339 "params": { 00:06:07.339 "trtype": "pcie", 00:06:07.339 "traddr": "0000:00:10.0", 00:06:07.339 "name": "Nvme0" 00:06:07.339 }, 00:06:07.339 "method": "bdev_nvme_attach_controller" 00:06:07.339 }, 00:06:07.339 { 00:06:07.339 "method": "bdev_wait_for_examine" 00:06:07.339 } 00:06:07.339 ] 00:06:07.339 } 00:06:07.339 ] 00:06:07.339 } 00:06:07.597 [2024-07-26 07:31:33.039032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.597 [2024-07-26 07:31:33.157255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.856 [2024-07-26 07:31:33.235204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:08.114  Copying: 56/56 [kB] (average 54 MBps) 00:06:08.114 00:06:08.114 07:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:08.114 07:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:08.114 07:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:08.114 07:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:08.114 07:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:08.114 07:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:08.114 07:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:08.114 07:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:08.114 07:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:08.114 07:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:08.114 07:31:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:08.114 [2024-07-26 07:31:33.701239] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:08.114 [2024-07-26 07:31:33.701901] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61939 ] 00:06:08.114 { 00:06:08.114 "subsystems": [ 00:06:08.114 { 00:06:08.114 "subsystem": "bdev", 00:06:08.114 "config": [ 00:06:08.114 { 00:06:08.114 "params": { 00:06:08.114 "trtype": "pcie", 00:06:08.114 "traddr": "0000:00:10.0", 00:06:08.114 "name": "Nvme0" 00:06:08.114 }, 00:06:08.114 "method": "bdev_nvme_attach_controller" 00:06:08.114 }, 00:06:08.114 { 00:06:08.114 "method": "bdev_wait_for_examine" 00:06:08.114 } 00:06:08.114 ] 00:06:08.114 } 00:06:08.114 ] 00:06:08.114 } 00:06:08.373 [2024-07-26 07:31:33.840573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.373 [2024-07-26 07:31:33.942690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.632 [2024-07-26 07:31:34.017120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:08.891  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:08.891 00:06:08.891 07:31:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:08.891 07:31:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:08.891 07:31:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:08.891 07:31:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:08.891 07:31:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:08.891 07:31:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:08.891 07:31:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.458 07:31:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:09.458 07:31:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:09.458 07:31:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:09.458 07:31:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.458 [2024-07-26 07:31:35.006761] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:09.458 [2024-07-26 07:31:35.006840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61963 ] 00:06:09.458 { 00:06:09.458 "subsystems": [ 00:06:09.458 { 00:06:09.458 "subsystem": "bdev", 00:06:09.458 "config": [ 00:06:09.458 { 00:06:09.458 "params": { 00:06:09.458 "trtype": "pcie", 00:06:09.458 "traddr": "0000:00:10.0", 00:06:09.458 "name": "Nvme0" 00:06:09.458 }, 00:06:09.458 "method": "bdev_nvme_attach_controller" 00:06:09.458 }, 00:06:09.458 { 00:06:09.458 "method": "bdev_wait_for_examine" 00:06:09.458 } 00:06:09.458 ] 00:06:09.458 } 00:06:09.458 ] 00:06:09.458 } 00:06:09.717 [2024-07-26 07:31:35.144249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.717 [2024-07-26 07:31:35.268966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.975 [2024-07-26 07:31:35.347654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:10.233  Copying: 56/56 [kB] (average 54 MBps) 00:06:10.233 00:06:10.233 07:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:10.233 07:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:10.233 07:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:10.233 07:31:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.233 { 00:06:10.233 "subsystems": [ 00:06:10.233 { 00:06:10.233 "subsystem": "bdev", 00:06:10.233 "config": [ 00:06:10.233 { 00:06:10.233 "params": { 00:06:10.233 "trtype": "pcie", 00:06:10.233 "traddr": "0000:00:10.0", 00:06:10.233 "name": "Nvme0" 00:06:10.233 }, 00:06:10.233 "method": "bdev_nvme_attach_controller" 00:06:10.233 }, 00:06:10.233 { 00:06:10.233 "method": "bdev_wait_for_examine" 00:06:10.233 } 00:06:10.233 ] 00:06:10.233 } 00:06:10.233 ] 00:06:10.233 } 00:06:10.233 [2024-07-26 07:31:35.821232] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:10.233 [2024-07-26 07:31:35.821360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61977 ] 00:06:10.492 [2024-07-26 07:31:35.966944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.492 [2024-07-26 07:31:36.062951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.751 [2024-07-26 07:31:36.141708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:11.010  Copying: 56/56 [kB] (average 54 MBps) 00:06:11.010 00:06:11.010 07:31:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.010 07:31:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:11.010 07:31:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:11.010 07:31:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:11.010 07:31:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:11.010 07:31:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:11.010 07:31:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:11.010 07:31:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:11.010 07:31:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:11.010 07:31:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:11.010 07:31:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.269 { 00:06:11.269 "subsystems": [ 00:06:11.269 { 00:06:11.269 "subsystem": "bdev", 00:06:11.269 "config": [ 00:06:11.269 { 00:06:11.269 "params": { 00:06:11.269 "trtype": "pcie", 00:06:11.269 "traddr": "0000:00:10.0", 00:06:11.269 "name": "Nvme0" 00:06:11.269 }, 00:06:11.269 "method": "bdev_nvme_attach_controller" 00:06:11.269 }, 00:06:11.269 { 00:06:11.269 "method": "bdev_wait_for_examine" 00:06:11.269 } 00:06:11.269 ] 00:06:11.269 } 00:06:11.269 ] 00:06:11.269 } 00:06:11.269 [2024-07-26 07:31:36.636338] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:11.270 [2024-07-26 07:31:36.636492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61998 ] 00:06:11.270 [2024-07-26 07:31:36.785999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.529 [2024-07-26 07:31:36.937640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.529 [2024-07-26 07:31:37.017660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.046  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:12.046 00:06:12.046 07:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:12.046 07:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:12.046 07:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:12.046 07:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:12.046 07:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:12.046 07:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:12.046 07:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:12.046 07:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.613 07:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:12.613 07:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:12.613 07:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.613 07:31:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.613 { 00:06:12.613 "subsystems": [ 00:06:12.613 { 00:06:12.613 "subsystem": "bdev", 00:06:12.613 "config": [ 00:06:12.613 { 00:06:12.613 "params": { 00:06:12.613 "trtype": "pcie", 00:06:12.613 "traddr": "0000:00:10.0", 00:06:12.613 "name": "Nvme0" 00:06:12.613 }, 00:06:12.613 "method": "bdev_nvme_attach_controller" 00:06:12.613 }, 00:06:12.613 { 00:06:12.613 "method": "bdev_wait_for_examine" 00:06:12.613 } 00:06:12.613 ] 00:06:12.613 } 00:06:12.613 ] 00:06:12.613 } 00:06:12.613 [2024-07-26 07:31:38.069654] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:12.613 [2024-07-26 07:31:38.069885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62017 ] 00:06:12.613 [2024-07-26 07:31:38.203901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.872 [2024-07-26 07:31:38.347173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.872 [2024-07-26 07:31:38.426336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:13.389  Copying: 48/48 [kB] (average 46 MBps) 00:06:13.389 00:06:13.389 07:31:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:13.389 07:31:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:13.389 07:31:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:13.389 07:31:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.389 [2024-07-26 07:31:38.913783] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:06:13.389 [2024-07-26 07:31:38.913883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62036 ] 00:06:13.389 { 00:06:13.389 "subsystems": [ 00:06:13.389 { 00:06:13.389 "subsystem": "bdev", 00:06:13.389 "config": [ 00:06:13.389 { 00:06:13.389 "params": { 00:06:13.389 "trtype": "pcie", 00:06:13.389 "traddr": "0000:00:10.0", 00:06:13.389 "name": "Nvme0" 00:06:13.389 }, 00:06:13.389 "method": "bdev_nvme_attach_controller" 00:06:13.389 }, 00:06:13.389 { 00:06:13.389 "method": "bdev_wait_for_examine" 00:06:13.389 } 00:06:13.389 ] 00:06:13.389 } 00:06:13.389 ] 00:06:13.389 } 00:06:13.648 [2024-07-26 07:31:39.053367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.648 [2024-07-26 07:31:39.183110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.907 [2024-07-26 07:31:39.265450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.165  Copying: 48/48 [kB] (average 46 MBps) 00:06:14.165 00:06:14.165 07:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:14.165 07:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:14.165 07:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:14.165 07:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:14.165 07:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:14.165 07:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:14.165 07:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:14.165 07:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:14.165 07:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:06:14.165 07:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:14.165 07:31:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.165 [2024-07-26 07:31:39.751507] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:06:14.165 [2024-07-26 07:31:39.751613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62057 ] 00:06:14.165 { 00:06:14.165 "subsystems": [ 00:06:14.165 { 00:06:14.165 "subsystem": "bdev", 00:06:14.165 "config": [ 00:06:14.165 { 00:06:14.165 "params": { 00:06:14.165 "trtype": "pcie", 00:06:14.165 "traddr": "0000:00:10.0", 00:06:14.165 "name": "Nvme0" 00:06:14.165 }, 00:06:14.165 "method": "bdev_nvme_attach_controller" 00:06:14.165 }, 00:06:14.165 { 00:06:14.165 "method": "bdev_wait_for_examine" 00:06:14.165 } 00:06:14.165 ] 00:06:14.165 } 00:06:14.165 ] 00:06:14.165 } 00:06:14.424 [2024-07-26 07:31:39.886947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.424 [2024-07-26 07:31:40.023680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.682 [2024-07-26 07:31:40.103398] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.941  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:14.941 00:06:14.941 07:31:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:14.941 07:31:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:14.941 07:31:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:14.941 07:31:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:14.941 07:31:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:14.941 07:31:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:14.941 07:31:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.507 07:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:15.507 07:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:15.507 07:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.507 07:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.507 [2024-07-26 07:31:41.099363] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:15.507 [2024-07-26 07:31:41.099456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62076 ] 00:06:15.766 { 00:06:15.766 "subsystems": [ 00:06:15.766 { 00:06:15.766 "subsystem": "bdev", 00:06:15.766 "config": [ 00:06:15.766 { 00:06:15.766 "params": { 00:06:15.766 "trtype": "pcie", 00:06:15.766 "traddr": "0000:00:10.0", 00:06:15.766 "name": "Nvme0" 00:06:15.766 }, 00:06:15.766 "method": "bdev_nvme_attach_controller" 00:06:15.766 }, 00:06:15.766 { 00:06:15.766 "method": "bdev_wait_for_examine" 00:06:15.766 } 00:06:15.766 ] 00:06:15.766 } 00:06:15.767 ] 00:06:15.767 } 00:06:15.767 [2024-07-26 07:31:41.233678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.025 [2024-07-26 07:31:41.377405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.025 [2024-07-26 07:31:41.460721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:16.593  Copying: 48/48 [kB] (average 46 MBps) 00:06:16.593 00:06:16.593 07:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:16.593 07:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:16.593 07:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:16.593 07:31:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.593 [2024-07-26 07:31:41.966166] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:16.593 [2024-07-26 07:31:41.966278] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62095 ] 00:06:16.593 { 00:06:16.593 "subsystems": [ 00:06:16.593 { 00:06:16.593 "subsystem": "bdev", 00:06:16.593 "config": [ 00:06:16.593 { 00:06:16.593 "params": { 00:06:16.593 "trtype": "pcie", 00:06:16.593 "traddr": "0000:00:10.0", 00:06:16.593 "name": "Nvme0" 00:06:16.593 }, 00:06:16.593 "method": "bdev_nvme_attach_controller" 00:06:16.593 }, 00:06:16.593 { 00:06:16.593 "method": "bdev_wait_for_examine" 00:06:16.593 } 00:06:16.593 ] 00:06:16.593 } 00:06:16.593 ] 00:06:16.593 } 00:06:16.593 [2024-07-26 07:31:42.106216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.852 [2024-07-26 07:31:42.240870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.852 [2024-07-26 07:31:42.322043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.420  Copying: 48/48 [kB] (average 46 MBps) 00:06:17.420 00:06:17.420 07:31:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:17.420 07:31:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:17.420 07:31:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:17.420 07:31:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:17.421 07:31:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:17.421 07:31:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:17.421 07:31:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:17.421 07:31:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:17.421 07:31:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:17.421 07:31:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:17.421 07:31:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.421 [2024-07-26 07:31:42.812385] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:17.421 [2024-07-26 07:31:42.812512] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62116 ] 00:06:17.421 { 00:06:17.421 "subsystems": [ 00:06:17.421 { 00:06:17.421 "subsystem": "bdev", 00:06:17.421 "config": [ 00:06:17.421 { 00:06:17.421 "params": { 00:06:17.421 "trtype": "pcie", 00:06:17.421 "traddr": "0000:00:10.0", 00:06:17.421 "name": "Nvme0" 00:06:17.421 }, 00:06:17.421 "method": "bdev_nvme_attach_controller" 00:06:17.421 }, 00:06:17.421 { 00:06:17.421 "method": "bdev_wait_for_examine" 00:06:17.421 } 00:06:17.421 ] 00:06:17.421 } 00:06:17.421 ] 00:06:17.421 } 00:06:17.421 [2024-07-26 07:31:42.951694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.683 [2024-07-26 07:31:43.083374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.683 [2024-07-26 07:31:43.162025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.251  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:18.251 00:06:18.251 00:06:18.251 real 0m18.036s 00:06:18.251 user 0m13.173s 00:06:18.251 sys 0m7.281s 00:06:18.251 07:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.251 07:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.251 ************************************ 00:06:18.251 END TEST dd_rw 00:06:18.251 ************************************ 00:06:18.251 07:31:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:18.251 07:31:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.251 07:31:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.251 07:31:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.251 ************************************ 00:06:18.251 START TEST dd_rw_offset 00:06:18.251 ************************************ 00:06:18.251 07:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:06:18.251 07:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:18.251 07:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:18.251 07:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:18.251 07:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:18.251 07:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:18.252 07:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=x77420d1euav5u9ofrdbyu8k8w7fsqqoagya4qwhe669e2nzd85xmxes6rx17r50j9uhuueyk59mslbrdwqeq2sihtruebybvkbvj8soh5zpodgd6r188ozks9zw5cgf4p4uijkrisf66hkcg0suxyc5t6pee4tnr7nb7acoxnaykriwm1v81wtd5vsfy92hydpg5m2b74bdkswfoln1ta08ndu4b5gcadhivgxraltc5giy3nq0otk1r9lacmyy1hj4wt8so230pq29p5xjlep6trst15vc1gxv1cbeauzp75pmeqys9wjta8zahug0p7ce7n757kbj6cxwehha8213o38huddya43go820tqerrtxykpsh5ekr1eekui5541yju1d0hjwb7xo6ugwllj5r40o3etsy65s73vxjq3hkob5kcagw38ycwbsd7n8vcd6sdh0z70wc84i7j7rp4y7fqcn01rdsd6tnc5k43v0flceofse9gupa1kzurqffnugfbo6nbgemutp3977j4t4euyehxwm5b8l2e5dwlfpu0jcxcjqykzh41zagpy60rywif547eehhsx00ndxhxnjx7lrhn1wv529s7nptsohgrd3tottg9gm82e3lyqaqzlvemmxzobdeyfywshtsj32avgcabwzbk5hsf9iwwrcduqloq5snlm77cmm68stw21pjmog6r0hucuokej3m75qe0hnlngz8gjc7pe96gqs8jdjvwjh4anyr64if1oeer60epb9zp0sija3200vaiwq59ywfrkfuqhm4feyc7pvtixh80ldzxy17w7u1rsuifd2pow4mh056fphwvvudrpdezoqpihbn6duvgff1htwcoofe8s9uzmo4n6dqw1qbjekvl4kl0dzmk8maze2bq37f8jo6i5xhli2slk0i1krkevc5kxwki4aru8ncr3dxvhz84u3bn9qq7ktvnjmut6bic9vuzvikf42ukrjhhdsyq4q4s3k85xd4h1gespcjwi2lwh1b7wgh64cjdmkfz80plgh264sdblq2etp8lid70fdlpa20k2plx14rrnr4nc5e3j5ulogfk1dihrxfdr41pij3u9b11y756cuelfjoe4nz4xbkyo7gt4h4n9o0i43xf50me2fcn10af9tq9d3tv7eq6pgn6d3q6t0ab66w2xi59kc6aanjgg6rnk2r668w17sgkle212fx2ie1lrm5qm5vt98x92vk17tm6tszeo64jp494q4h8328rgrn4c6n5eswx5ywsl8a8f6vknl3wlox107opkpa8jh1l87v2j7sb63hby7cfj6czlpa03dz6laggt6a6qiifh7etg9aigfavb1eapyl9blimsbbob2h8aw2ouuvgmcfn6jdtil6nk8j3mrfoo568e3upnq69xz0fqpb3qmvbj2pc72xgzugs9rj1xmv1b60nox93k1hd36g2c8ja0qv9c6ag6mhk65v1b79sltlj3zqaerzey3lmy7zmhiy9651vcqqi8rnvkmprv7v1uhkfervx0sb94cw5scuepb884y13o8rblf4nlsuw9unmhmxwz6g8h0muee403dp1p2e89m9ncqhv20a3m4l0k7vh4d7vqbo5hh0bnoh184tb3fb63buppghk089qp71zyeaxnl635bzvcp2l2tc6vv5tgdvc049e6wvzd6tkla98r5znyn0vkatvcrsldstq8lzgl0gbqvrfzlyu4wz1mn3wiv14u4eex83sdtapxd4o8n4pf33nzc5mfcfsm3rp5j2j5m7jbh8xqktckraz3hxx3b770tpuse6iqabd26rrq9d3rvknn9aqhnmjpy8w3s37wsf0p6w3widht70um2av2zngwq6kq615bv8ksdxjkohfj41ilyxnmda3yo2j3c9s59qx5ep6wlwwkfz4dhuj9wihw1mf9awgk2hn4y1axa14zczwla97i6zdvcmuu6n1k01s2fhxjv8zzhzmf9mo6xc4y62ni3yf18822p12aq038ikfzrcbkcn71t08l0ldulv4jkswg8hjdgvdxpoaptf6e1x7pom5at21qag8fb1qh8r0duua1ovlau9owrwrpvmyjszhmsa9p8al5pk6j5mpid1pdhfoun157cnmnsdk5tgw9j4tc6h439n3ztftzpqw5zk5dopv8drc1glrenrnxrow4l5t3tolq8312t3gsantpaix09t4ncd2q32ipjj5d50or7ooehg906bondcuig7vi5c6mmj1f293akbww8qqrhyrksdh3jdvx037l65u0i3whj5hgdaops90bof5poo16h8h7w1vug3e9lmjbd0ety9pza8txfyujfb7652qlj4b6ch557gyyah1qt5716jslbvbp240w0uvz34qf723g11ppp19nbmfkhrdrh30zjsaq5yqpzfunr1l459qfb5f8jlsvy5nm13zqpd8axiqrp0d4wersu00l030e6emqxw6mharltpkdgjn33ron7buzj93zx3d1rl4a4tnpj1hbblv0nch1qjz8k06q55mz1b1h6w35r9pw1189rbj1s132zkmg8edrjvyk258vm8ekmrzxi7a29t5qgbhny66undnw3nn937jih6qe57qx38i56txmgnp902anjsg7hic57nnk7wn2568xp36k4c5lh8qirxphgnceq4tvvbsavvnesgkxmzyphcx6jt02rb9svyrtrlnx1fquaxt8kua8li2qtzxwikdeb2zv2l9rf6l6dedc1sol7wjxh2mqsej62ow5xpffj8a9jazgn3hi9azi8xo7cneo4ajuk78np3me64kv03h8u5xtu5c4u2hx76fqf3ttvp6xntk3eaha6slql6b16rkcm5zdps0ci11k164r5iz96q0cs329nenzk7dfa54sav11bbnysxxjyp3zhv3a0xnaffbqgl2hn7p60q9ibvge7b6nof7u9fw36d6pjtl2ld7aijtbmo1bp51jswrliwfnu8jwt0hnbfgk8fnhfh85tt9axlhl8lg93apzoijxy7stje5n4gy7p9mnwdg21o04h68hqr33kw08gm9qgewqh8qzjbmbmnifx2emu3mlpjxf4833sdstr88r7m8z170hyw6ieoum4o96shuanv2y7eymnkc43v1je5mp6bw813egbek1ffuewip3o5rjsosnqho4px6nn7dlznelhpx6hg653ieeidi8edtddor90rnxf4p69hqa3bj7z7svout4x8eisa5kpf82jjiqihwq2rzrvc51uvh3w6miimqr5kfmzc8afepwt48p3p3752j6k8nspvqq79qzcz0ws7jp86dfr7vnfe90r6zb6gzua4gjfyvwv0y4cuoixbbc92tcu23io03l56vefz1oyomogkwree22qqz0jhpev7vv8vdg2i7744n8644f5hxomm7lv0wx4kzbodyvugzsvuzcbrohfefi98vy9kfsii9jibg0sqcp1x25a1ldu42n
la6597xl68l4pn4zhh8r88p158quijltxv27ipvtsekq9ttkdnazkwv8u7qxya38sfp8fqnuuptxjp71i0cr5vul8ygj6bbvu5ukjj1uehh0kckwpjlah7gsws7rfcltsdmy7raqgv19jgv6i7zwqcc35hespxrlkpmce96lwrswuxzhqg9ecqbam7vkf54zutdy1sjg76jwul0655c84edg548vmwqj8ak9mp0apf8ermay67b2cmg0ytlcuop8ncrhxqi2hnyukrhu03gdg8cat770euhf5ladti4b9kcuppvwufe4awi2o26mztkdcyacuyewc4zlm0akpzvottbpooyusjfea5gxjvea70d5ntkeenxiku9jxwxqfdz1cskoeq044bq8zp8eca31xh9f2i65t12dxxk1m06ibzj1lwpi1gd8ditobf9aadfm88wm0as208xh597o38kwg2pd72ix1xc02a7clbewlmbpixmtmwjl5bnty5loe8olpt6813cdauomy974eub4zyxw8a57ftjhdi 00:06:18.252 07:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:18.252 07:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:18.252 07:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:18.252 07:31:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:18.252 [2024-07-26 07:31:43.760996] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:06:18.252 [2024-07-26 07:31:43.761129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62154 ] 00:06:18.252 { 00:06:18.252 "subsystems": [ 00:06:18.252 { 00:06:18.252 "subsystem": "bdev", 00:06:18.252 "config": [ 00:06:18.252 { 00:06:18.252 "params": { 00:06:18.252 "trtype": "pcie", 00:06:18.252 "traddr": "0000:00:10.0", 00:06:18.252 "name": "Nvme0" 00:06:18.252 }, 00:06:18.252 "method": "bdev_nvme_attach_controller" 00:06:18.252 }, 00:06:18.252 { 00:06:18.252 "method": "bdev_wait_for_examine" 00:06:18.252 } 00:06:18.252 ] 00:06:18.252 } 00:06:18.252 ] 00:06:18.252 } 00:06:18.511 [2024-07-26 07:31:43.902576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.511 [2024-07-26 07:31:44.064575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.769 [2024-07-26 07:31:44.146971] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.027  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:19.027 00:06:19.027 07:31:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:19.027 07:31:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:19.027 07:31:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:19.027 07:31:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:19.285 { 00:06:19.285 "subsystems": [ 00:06:19.285 { 00:06:19.285 "subsystem": "bdev", 00:06:19.285 "config": [ 00:06:19.285 { 00:06:19.285 "params": { 00:06:19.285 "trtype": "pcie", 00:06:19.285 "traddr": "0000:00:10.0", 00:06:19.285 "name": "Nvme0" 00:06:19.285 }, 00:06:19.285 "method": "bdev_nvme_attach_controller" 00:06:19.285 }, 00:06:19.285 { 00:06:19.285 "method": "bdev_wait_for_examine" 00:06:19.285 } 00:06:19.285 ] 00:06:19.285 } 00:06:19.285 ] 00:06:19.285 } 00:06:19.285 [2024-07-26 07:31:44.637994] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:19.285 [2024-07-26 07:31:44.638083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62174 ] 00:06:19.285 [2024-07-26 07:31:44.776077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.544 [2024-07-26 07:31:44.913689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.544 [2024-07-26 07:31:44.996292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.111  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:20.111 00:06:20.111 07:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:20.112 07:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ x77420d1euav5u9ofrdbyu8k8w7fsqqoagya4qwhe669e2nzd85xmxes6rx17r50j9uhuueyk59mslbrdwqeq2sihtruebybvkbvj8soh5zpodgd6r188ozks9zw5cgf4p4uijkrisf66hkcg0suxyc5t6pee4tnr7nb7acoxnaykriwm1v81wtd5vsfy92hydpg5m2b74bdkswfoln1ta08ndu4b5gcadhivgxraltc5giy3nq0otk1r9lacmyy1hj4wt8so230pq29p5xjlep6trst15vc1gxv1cbeauzp75pmeqys9wjta8zahug0p7ce7n757kbj6cxwehha8213o38huddya43go820tqerrtxykpsh5ekr1eekui5541yju1d0hjwb7xo6ugwllj5r40o3etsy65s73vxjq3hkob5kcagw38ycwbsd7n8vcd6sdh0z70wc84i7j7rp4y7fqcn01rdsd6tnc5k43v0flceofse9gupa1kzurqffnugfbo6nbgemutp3977j4t4euyehxwm5b8l2e5dwlfpu0jcxcjqykzh41zagpy60rywif547eehhsx00ndxhxnjx7lrhn1wv529s7nptsohgrd3tottg9gm82e3lyqaqzlvemmxzobdeyfywshtsj32avgcabwzbk5hsf9iwwrcduqloq5snlm77cmm68stw21pjmog6r0hucuokej3m75qe0hnlngz8gjc7pe96gqs8jdjvwjh4anyr64if1oeer60epb9zp0sija3200vaiwq59ywfrkfuqhm4feyc7pvtixh80ldzxy17w7u1rsuifd2pow4mh056fphwvvudrpdezoqpihbn6duvgff1htwcoofe8s9uzmo4n6dqw1qbjekvl4kl0dzmk8maze2bq37f8jo6i5xhli2slk0i1krkevc5kxwki4aru8ncr3dxvhz84u3bn9qq7ktvnjmut6bic9vuzvikf42ukrjhhdsyq4q4s3k85xd4h1gespcjwi2lwh1b7wgh64cjdmkfz80plgh264sdblq2etp8lid70fdlpa20k2plx14rrnr4nc5e3j5ulogfk1dihrxfdr41pij3u9b11y756cuelfjoe4nz4xbkyo7gt4h4n9o0i43xf50me2fcn10af9tq9d3tv7eq6pgn6d3q6t0ab66w2xi59kc6aanjgg6rnk2r668w17sgkle212fx2ie1lrm5qm5vt98x92vk17tm6tszeo64jp494q4h8328rgrn4c6n5eswx5ywsl8a8f6vknl3wlox107opkpa8jh1l87v2j7sb63hby7cfj6czlpa03dz6laggt6a6qiifh7etg9aigfavb1eapyl9blimsbbob2h8aw2ouuvgmcfn6jdtil6nk8j3mrfoo568e3upnq69xz0fqpb3qmvbj2pc72xgzugs9rj1xmv1b60nox93k1hd36g2c8ja0qv9c6ag6mhk65v1b79sltlj3zqaerzey3lmy7zmhiy9651vcqqi8rnvkmprv7v1uhkfervx0sb94cw5scuepb884y13o8rblf4nlsuw9unmhmxwz6g8h0muee403dp1p2e89m9ncqhv20a3m4l0k7vh4d7vqbo5hh0bnoh184tb3fb63buppghk089qp71zyeaxnl635bzvcp2l2tc6vv5tgdvc049e6wvzd6tkla98r5znyn0vkatvcrsldstq8lzgl0gbqvrfzlyu4wz1mn3wiv14u4eex83sdtapxd4o8n4pf33nzc5mfcfsm3rp5j2j5m7jbh8xqktckraz3hxx3b770tpuse6iqabd26rrq9d3rvknn9aqhnmjpy8w3s37wsf0p6w3widht70um2av2zngwq6kq615bv8ksdxjkohfj41ilyxnmda3yo2j3c9s59qx5ep6wlwwkfz4dhuj9wihw1mf9awgk2hn4y1axa14zczwla97i6zdvcmuu6n1k01s2fhxjv8zzhzmf9mo6xc4y62ni3yf18822p12aq038ikfzrcbkcn71t08l0ldulv4jkswg8hjdgvdxpoaptf6e1x7pom5at21qag8fb1qh8r0duua1ovlau9owrwrpvmyjszhmsa9p8al5pk6j5mpid1pdhfoun157cnmnsdk5tgw9j4tc6h439n3ztftzpqw5zk5dopv8drc1glrenrnxrow4l5t3tolq8312t3gsantpaix09t4ncd2q32ipjj5d50or7ooehg906bondcuig7vi5c6mmj1f293akbww8qqrhyrksdh3jdvx037l65u0i3whj5hgdaops90bof5poo16h8h7w1vug3e9lmjbd0ety9pza8txfyujfb7652qlj4b6ch557gyyah1qt5716jslbvbp240w0uvz34qf723g11ppp19nbmfkhrdrh30zjsaq5yqpzfunr1l459qfb5f8jlsvy5nm13zqpd8axiqrp0d4wersu00l030e6emqxw6mharltpkdgjn33ron7buzj93zx3d1rl4a4tnpj1hbblv0nch1qjz8k06q55mz1b1h6w35r9pw1189rbj1s132zkmg8edrjvyk258vm8ekmrzxi7a29t
5qgbhny66undnw3nn937jih6qe57qx38i56txmgnp902anjsg7hic57nnk7wn2568xp36k4c5lh8qirxphgnceq4tvvbsavvnesgkxmzyphcx6jt02rb9svyrtrlnx1fquaxt8kua8li2qtzxwikdeb2zv2l9rf6l6dedc1sol7wjxh2mqsej62ow5xpffj8a9jazgn3hi9azi8xo7cneo4ajuk78np3me64kv03h8u5xtu5c4u2hx76fqf3ttvp6xntk3eaha6slql6b16rkcm5zdps0ci11k164r5iz96q0cs329nenzk7dfa54sav11bbnysxxjyp3zhv3a0xnaffbqgl2hn7p60q9ibvge7b6nof7u9fw36d6pjtl2ld7aijtbmo1bp51jswrliwfnu8jwt0hnbfgk8fnhfh85tt9axlhl8lg93apzoijxy7stje5n4gy7p9mnwdg21o04h68hqr33kw08gm9qgewqh8qzjbmbmnifx2emu3mlpjxf4833sdstr88r7m8z170hyw6ieoum4o96shuanv2y7eymnkc43v1je5mp6bw813egbek1ffuewip3o5rjsosnqho4px6nn7dlznelhpx6hg653ieeidi8edtddor90rnxf4p69hqa3bj7z7svout4x8eisa5kpf82jjiqihwq2rzrvc51uvh3w6miimqr5kfmzc8afepwt48p3p3752j6k8nspvqq79qzcz0ws7jp86dfr7vnfe90r6zb6gzua4gjfyvwv0y4cuoixbbc92tcu23io03l56vefz1oyomogkwree22qqz0jhpev7vv8vdg2i7744n8644f5hxomm7lv0wx4kzbodyvugzsvuzcbrohfefi98vy9kfsii9jibg0sqcp1x25a1ldu42nla6597xl68l4pn4zhh8r88p158quijltxv27ipvtsekq9ttkdnazkwv8u7qxya38sfp8fqnuuptxjp71i0cr5vul8ygj6bbvu5ukjj1uehh0kckwpjlah7gsws7rfcltsdmy7raqgv19jgv6i7zwqcc35hespxrlkpmce96lwrswuxzhqg9ecqbam7vkf54zutdy1sjg76jwul0655c84edg548vmwqj8ak9mp0apf8ermay67b2cmg0ytlcuop8ncrhxqi2hnyukrhu03gdg8cat770euhf5ladti4b9kcuppvwufe4awi2o26mztkdcyacuyewc4zlm0akpzvottbpooyusjfea5gxjvea70d5ntkeenxiku9jxwxqfdz1cskoeq044bq8zp8eca31xh9f2i65t12dxxk1m06ibzj1lwpi1gd8ditobf9aadfm88wm0as208xh597o38kwg2pd72ix1xc02a7clbewlmbpixmtmwjl5bnty5loe8olpt6813cdauomy974eub4zyxw8a57ftjhdi == \x\7\7\4\2\0\d\1\e\u\a\v\5\u\9\o\f\r\d\b\y\u\8\k\8\w\7\f\s\q\q\o\a\g\y\a\4\q\w\h\e\6\6\9\e\2\n\z\d\8\5\x\m\x\e\s\6\r\x\1\7\r\5\0\j\9\u\h\u\u\e\y\k\5\9\m\s\l\b\r\d\w\q\e\q\2\s\i\h\t\r\u\e\b\y\b\v\k\b\v\j\8\s\o\h\5\z\p\o\d\g\d\6\r\1\8\8\o\z\k\s\9\z\w\5\c\g\f\4\p\4\u\i\j\k\r\i\s\f\6\6\h\k\c\g\0\s\u\x\y\c\5\t\6\p\e\e\4\t\n\r\7\n\b\7\a\c\o\x\n\a\y\k\r\i\w\m\1\v\8\1\w\t\d\5\v\s\f\y\9\2\h\y\d\p\g\5\m\2\b\7\4\b\d\k\s\w\f\o\l\n\1\t\a\0\8\n\d\u\4\b\5\g\c\a\d\h\i\v\g\x\r\a\l\t\c\5\g\i\y\3\n\q\0\o\t\k\1\r\9\l\a\c\m\y\y\1\h\j\4\w\t\8\s\o\2\3\0\p\q\2\9\p\5\x\j\l\e\p\6\t\r\s\t\1\5\v\c\1\g\x\v\1\c\b\e\a\u\z\p\7\5\p\m\e\q\y\s\9\w\j\t\a\8\z\a\h\u\g\0\p\7\c\e\7\n\7\5\7\k\b\j\6\c\x\w\e\h\h\a\8\2\1\3\o\3\8\h\u\d\d\y\a\4\3\g\o\8\2\0\t\q\e\r\r\t\x\y\k\p\s\h\5\e\k\r\1\e\e\k\u\i\5\5\4\1\y\j\u\1\d\0\h\j\w\b\7\x\o\6\u\g\w\l\l\j\5\r\4\0\o\3\e\t\s\y\6\5\s\7\3\v\x\j\q\3\h\k\o\b\5\k\c\a\g\w\3\8\y\c\w\b\s\d\7\n\8\v\c\d\6\s\d\h\0\z\7\0\w\c\8\4\i\7\j\7\r\p\4\y\7\f\q\c\n\0\1\r\d\s\d\6\t\n\c\5\k\4\3\v\0\f\l\c\e\o\f\s\e\9\g\u\p\a\1\k\z\u\r\q\f\f\n\u\g\f\b\o\6\n\b\g\e\m\u\t\p\3\9\7\7\j\4\t\4\e\u\y\e\h\x\w\m\5\b\8\l\2\e\5\d\w\l\f\p\u\0\j\c\x\c\j\q\y\k\z\h\4\1\z\a\g\p\y\6\0\r\y\w\i\f\5\4\7\e\e\h\h\s\x\0\0\n\d\x\h\x\n\j\x\7\l\r\h\n\1\w\v\5\2\9\s\7\n\p\t\s\o\h\g\r\d\3\t\o\t\t\g\9\g\m\8\2\e\3\l\y\q\a\q\z\l\v\e\m\m\x\z\o\b\d\e\y\f\y\w\s\h\t\s\j\3\2\a\v\g\c\a\b\w\z\b\k\5\h\s\f\9\i\w\w\r\c\d\u\q\l\o\q\5\s\n\l\m\7\7\c\m\m\6\8\s\t\w\2\1\p\j\m\o\g\6\r\0\h\u\c\u\o\k\e\j\3\m\7\5\q\e\0\h\n\l\n\g\z\8\g\j\c\7\p\e\9\6\g\q\s\8\j\d\j\v\w\j\h\4\a\n\y\r\6\4\i\f\1\o\e\e\r\6\0\e\p\b\9\z\p\0\s\i\j\a\3\2\0\0\v\a\i\w\q\5\9\y\w\f\r\k\f\u\q\h\m\4\f\e\y\c\7\p\v\t\i\x\h\8\0\l\d\z\x\y\1\7\w\7\u\1\r\s\u\i\f\d\2\p\o\w\4\m\h\0\5\6\f\p\h\w\v\v\u\d\r\p\d\e\z\o\q\p\i\h\b\n\6\d\u\v\g\f\f\1\h\t\w\c\o\o\f\e\8\s\9\u\z\m\o\4\n\6\d\q\w\1\q\b\j\e\k\v\l\4\k\l\0\d\z\m\k\8\m\a\z\e\2\b\q\3\7\f\8\j\o\6\i\5\x\h\l\i\2\s\l\k\0\i\1\k\r\k\e\v\c\5\k\x\w\k\i\4\a\r\u\8\n\c\r\3\d\x\v\h\z\8\4\u\3\b\n\9\q\q\7\k\t\v\n\j\m\u\t\6\b\i\c\9\v\u\z\v\i\k\f\4\2\u\k\r\j\h\h\d\s\y\q\4\q\4\s\3\k\8\5\x\d\4\h\1\g\e\s\p\c\j\w\i\2\l\w\h\1\b\7\w\g\h\6\4\c\j\d\m\k\f\z\
8\0\p\l\g\h\2\6\4\s\d\b\l\q\2\e\t\p\8\l\i\d\7\0\f\d\l\p\a\2\0\k\2\p\l\x\1\4\r\r\n\r\4\n\c\5\e\3\j\5\u\l\o\g\f\k\1\d\i\h\r\x\f\d\r\4\1\p\i\j\3\u\9\b\1\1\y\7\5\6\c\u\e\l\f\j\o\e\4\n\z\4\x\b\k\y\o\7\g\t\4\h\4\n\9\o\0\i\4\3\x\f\5\0\m\e\2\f\c\n\1\0\a\f\9\t\q\9\d\3\t\v\7\e\q\6\p\g\n\6\d\3\q\6\t\0\a\b\6\6\w\2\x\i\5\9\k\c\6\a\a\n\j\g\g\6\r\n\k\2\r\6\6\8\w\1\7\s\g\k\l\e\2\1\2\f\x\2\i\e\1\l\r\m\5\q\m\5\v\t\9\8\x\9\2\v\k\1\7\t\m\6\t\s\z\e\o\6\4\j\p\4\9\4\q\4\h\8\3\2\8\r\g\r\n\4\c\6\n\5\e\s\w\x\5\y\w\s\l\8\a\8\f\6\v\k\n\l\3\w\l\o\x\1\0\7\o\p\k\p\a\8\j\h\1\l\8\7\v\2\j\7\s\b\6\3\h\b\y\7\c\f\j\6\c\z\l\p\a\0\3\d\z\6\l\a\g\g\t\6\a\6\q\i\i\f\h\7\e\t\g\9\a\i\g\f\a\v\b\1\e\a\p\y\l\9\b\l\i\m\s\b\b\o\b\2\h\8\a\w\2\o\u\u\v\g\m\c\f\n\6\j\d\t\i\l\6\n\k\8\j\3\m\r\f\o\o\5\6\8\e\3\u\p\n\q\6\9\x\z\0\f\q\p\b\3\q\m\v\b\j\2\p\c\7\2\x\g\z\u\g\s\9\r\j\1\x\m\v\1\b\6\0\n\o\x\9\3\k\1\h\d\3\6\g\2\c\8\j\a\0\q\v\9\c\6\a\g\6\m\h\k\6\5\v\1\b\7\9\s\l\t\l\j\3\z\q\a\e\r\z\e\y\3\l\m\y\7\z\m\h\i\y\9\6\5\1\v\c\q\q\i\8\r\n\v\k\m\p\r\v\7\v\1\u\h\k\f\e\r\v\x\0\s\b\9\4\c\w\5\s\c\u\e\p\b\8\8\4\y\1\3\o\8\r\b\l\f\4\n\l\s\u\w\9\u\n\m\h\m\x\w\z\6\g\8\h\0\m\u\e\e\4\0\3\d\p\1\p\2\e\8\9\m\9\n\c\q\h\v\2\0\a\3\m\4\l\0\k\7\v\h\4\d\7\v\q\b\o\5\h\h\0\b\n\o\h\1\8\4\t\b\3\f\b\6\3\b\u\p\p\g\h\k\0\8\9\q\p\7\1\z\y\e\a\x\n\l\6\3\5\b\z\v\c\p\2\l\2\t\c\6\v\v\5\t\g\d\v\c\0\4\9\e\6\w\v\z\d\6\t\k\l\a\9\8\r\5\z\n\y\n\0\v\k\a\t\v\c\r\s\l\d\s\t\q\8\l\z\g\l\0\g\b\q\v\r\f\z\l\y\u\4\w\z\1\m\n\3\w\i\v\1\4\u\4\e\e\x\8\3\s\d\t\a\p\x\d\4\o\8\n\4\p\f\3\3\n\z\c\5\m\f\c\f\s\m\3\r\p\5\j\2\j\5\m\7\j\b\h\8\x\q\k\t\c\k\r\a\z\3\h\x\x\3\b\7\7\0\t\p\u\s\e\6\i\q\a\b\d\2\6\r\r\q\9\d\3\r\v\k\n\n\9\a\q\h\n\m\j\p\y\8\w\3\s\3\7\w\s\f\0\p\6\w\3\w\i\d\h\t\7\0\u\m\2\a\v\2\z\n\g\w\q\6\k\q\6\1\5\b\v\8\k\s\d\x\j\k\o\h\f\j\4\1\i\l\y\x\n\m\d\a\3\y\o\2\j\3\c\9\s\5\9\q\x\5\e\p\6\w\l\w\w\k\f\z\4\d\h\u\j\9\w\i\h\w\1\m\f\9\a\w\g\k\2\h\n\4\y\1\a\x\a\1\4\z\c\z\w\l\a\9\7\i\6\z\d\v\c\m\u\u\6\n\1\k\0\1\s\2\f\h\x\j\v\8\z\z\h\z\m\f\9\m\o\6\x\c\4\y\6\2\n\i\3\y\f\1\8\8\2\2\p\1\2\a\q\0\3\8\i\k\f\z\r\c\b\k\c\n\7\1\t\0\8\l\0\l\d\u\l\v\4\j\k\s\w\g\8\h\j\d\g\v\d\x\p\o\a\p\t\f\6\e\1\x\7\p\o\m\5\a\t\2\1\q\a\g\8\f\b\1\q\h\8\r\0\d\u\u\a\1\o\v\l\a\u\9\o\w\r\w\r\p\v\m\y\j\s\z\h\m\s\a\9\p\8\a\l\5\p\k\6\j\5\m\p\i\d\1\p\d\h\f\o\u\n\1\5\7\c\n\m\n\s\d\k\5\t\g\w\9\j\4\t\c\6\h\4\3\9\n\3\z\t\f\t\z\p\q\w\5\z\k\5\d\o\p\v\8\d\r\c\1\g\l\r\e\n\r\n\x\r\o\w\4\l\5\t\3\t\o\l\q\8\3\1\2\t\3\g\s\a\n\t\p\a\i\x\0\9\t\4\n\c\d\2\q\3\2\i\p\j\j\5\d\5\0\o\r\7\o\o\e\h\g\9\0\6\b\o\n\d\c\u\i\g\7\v\i\5\c\6\m\m\j\1\f\2\9\3\a\k\b\w\w\8\q\q\r\h\y\r\k\s\d\h\3\j\d\v\x\0\3\7\l\6\5\u\0\i\3\w\h\j\5\h\g\d\a\o\p\s\9\0\b\o\f\5\p\o\o\1\6\h\8\h\7\w\1\v\u\g\3\e\9\l\m\j\b\d\0\e\t\y\9\p\z\a\8\t\x\f\y\u\j\f\b\7\6\5\2\q\l\j\4\b\6\c\h\5\5\7\g\y\y\a\h\1\q\t\5\7\1\6\j\s\l\b\v\b\p\2\4\0\w\0\u\v\z\3\4\q\f\7\2\3\g\1\1\p\p\p\1\9\n\b\m\f\k\h\r\d\r\h\3\0\z\j\s\a\q\5\y\q\p\z\f\u\n\r\1\l\4\5\9\q\f\b\5\f\8\j\l\s\v\y\5\n\m\1\3\z\q\p\d\8\a\x\i\q\r\p\0\d\4\w\e\r\s\u\0\0\l\0\3\0\e\6\e\m\q\x\w\6\m\h\a\r\l\t\p\k\d\g\j\n\3\3\r\o\n\7\b\u\z\j\9\3\z\x\3\d\1\r\l\4\a\4\t\n\p\j\1\h\b\b\l\v\0\n\c\h\1\q\j\z\8\k\0\6\q\5\5\m\z\1\b\1\h\6\w\3\5\r\9\p\w\1\1\8\9\r\b\j\1\s\1\3\2\z\k\m\g\8\e\d\r\j\v\y\k\2\5\8\v\m\8\e\k\m\r\z\x\i\7\a\2\9\t\5\q\g\b\h\n\y\6\6\u\n\d\n\w\3\n\n\9\3\7\j\i\h\6\q\e\5\7\q\x\3\8\i\5\6\t\x\m\g\n\p\9\0\2\a\n\j\s\g\7\h\i\c\5\7\n\n\k\7\w\n\2\5\6\8\x\p\3\6\k\4\c\5\l\h\8\q\i\r\x\p\h\g\n\c\e\q\4\t\v\v\b\s\a\v\v\n\e\s\g\k\x\m\z\y\p\h\c\x\6\j\t\0\2\r\b\9\s\v\y\r\t\r\l\n\x\1\f\q\u\a\x\t\8\k\u\a\8\l\i\2\q\t\z\x\w\i\k\d\e\b\2\z\v\2\l\9\r\f\6\l\6\d\e\d\c\1\s\o\l\7\w\j\x\h\2\m\q\s\e\j\6\2\o\w\5\x
\p\f\f\j\8\a\9\j\a\z\g\n\3\h\i\9\a\z\i\8\x\o\7\c\n\e\o\4\a\j\u\k\7\8\n\p\3\m\e\6\4\k\v\0\3\h\8\u\5\x\t\u\5\c\4\u\2\h\x\7\6\f\q\f\3\t\t\v\p\6\x\n\t\k\3\e\a\h\a\6\s\l\q\l\6\b\1\6\r\k\c\m\5\z\d\p\s\0\c\i\1\1\k\1\6\4\r\5\i\z\9\6\q\0\c\s\3\2\9\n\e\n\z\k\7\d\f\a\5\4\s\a\v\1\1\b\b\n\y\s\x\x\j\y\p\3\z\h\v\3\a\0\x\n\a\f\f\b\q\g\l\2\h\n\7\p\6\0\q\9\i\b\v\g\e\7\b\6\n\o\f\7\u\9\f\w\3\6\d\6\p\j\t\l\2\l\d\7\a\i\j\t\b\m\o\1\b\p\5\1\j\s\w\r\l\i\w\f\n\u\8\j\w\t\0\h\n\b\f\g\k\8\f\n\h\f\h\8\5\t\t\9\a\x\l\h\l\8\l\g\9\3\a\p\z\o\i\j\x\y\7\s\t\j\e\5\n\4\g\y\7\p\9\m\n\w\d\g\2\1\o\0\4\h\6\8\h\q\r\3\3\k\w\0\8\g\m\9\q\g\e\w\q\h\8\q\z\j\b\m\b\m\n\i\f\x\2\e\m\u\3\m\l\p\j\x\f\4\8\3\3\s\d\s\t\r\8\8\r\7\m\8\z\1\7\0\h\y\w\6\i\e\o\u\m\4\o\9\6\s\h\u\a\n\v\2\y\7\e\y\m\n\k\c\4\3\v\1\j\e\5\m\p\6\b\w\8\1\3\e\g\b\e\k\1\f\f\u\e\w\i\p\3\o\5\r\j\s\o\s\n\q\h\o\4\p\x\6\n\n\7\d\l\z\n\e\l\h\p\x\6\h\g\6\5\3\i\e\e\i\d\i\8\e\d\t\d\d\o\r\9\0\r\n\x\f\4\p\6\9\h\q\a\3\b\j\7\z\7\s\v\o\u\t\4\x\8\e\i\s\a\5\k\p\f\8\2\j\j\i\q\i\h\w\q\2\r\z\r\v\c\5\1\u\v\h\3\w\6\m\i\i\m\q\r\5\k\f\m\z\c\8\a\f\e\p\w\t\4\8\p\3\p\3\7\5\2\j\6\k\8\n\s\p\v\q\q\7\9\q\z\c\z\0\w\s\7\j\p\8\6\d\f\r\7\v\n\f\e\9\0\r\6\z\b\6\g\z\u\a\4\g\j\f\y\v\w\v\0\y\4\c\u\o\i\x\b\b\c\9\2\t\c\u\2\3\i\o\0\3\l\5\6\v\e\f\z\1\o\y\o\m\o\g\k\w\r\e\e\2\2\q\q\z\0\j\h\p\e\v\7\v\v\8\v\d\g\2\i\7\7\4\4\n\8\6\4\4\f\5\h\x\o\m\m\7\l\v\0\w\x\4\k\z\b\o\d\y\v\u\g\z\s\v\u\z\c\b\r\o\h\f\e\f\i\9\8\v\y\9\k\f\s\i\i\9\j\i\b\g\0\s\q\c\p\1\x\2\5\a\1\l\d\u\4\2\n\l\a\6\5\9\7\x\l\6\8\l\4\p\n\4\z\h\h\8\r\8\8\p\1\5\8\q\u\i\j\l\t\x\v\2\7\i\p\v\t\s\e\k\q\9\t\t\k\d\n\a\z\k\w\v\8\u\7\q\x\y\a\3\8\s\f\p\8\f\q\n\u\u\p\t\x\j\p\7\1\i\0\c\r\5\v\u\l\8\y\g\j\6\b\b\v\u\5\u\k\j\j\1\u\e\h\h\0\k\c\k\w\p\j\l\a\h\7\g\s\w\s\7\r\f\c\l\t\s\d\m\y\7\r\a\q\g\v\1\9\j\g\v\6\i\7\z\w\q\c\c\3\5\h\e\s\p\x\r\l\k\p\m\c\e\9\6\l\w\r\s\w\u\x\z\h\q\g\9\e\c\q\b\a\m\7\v\k\f\5\4\z\u\t\d\y\1\s\j\g\7\6\j\w\u\l\0\6\5\5\c\8\4\e\d\g\5\4\8\v\m\w\q\j\8\a\k\9\m\p\0\a\p\f\8\e\r\m\a\y\6\7\b\2\c\m\g\0\y\t\l\c\u\o\p\8\n\c\r\h\x\q\i\2\h\n\y\u\k\r\h\u\0\3\g\d\g\8\c\a\t\7\7\0\e\u\h\f\5\l\a\d\t\i\4\b\9\k\c\u\p\p\v\w\u\f\e\4\a\w\i\2\o\2\6\m\z\t\k\d\c\y\a\c\u\y\e\w\c\4\z\l\m\0\a\k\p\z\v\o\t\t\b\p\o\o\y\u\s\j\f\e\a\5\g\x\j\v\e\a\7\0\d\5\n\t\k\e\e\n\x\i\k\u\9\j\x\w\x\q\f\d\z\1\c\s\k\o\e\q\0\4\4\b\q\8\z\p\8\e\c\a\3\1\x\h\9\f\2\i\6\5\t\1\2\d\x\x\k\1\m\0\6\i\b\z\j\1\l\w\p\i\1\g\d\8\d\i\t\o\b\f\9\a\a\d\f\m\8\8\w\m\0\a\s\2\0\8\x\h\5\9\7\o\3\8\k\w\g\2\p\d\7\2\i\x\1\x\c\0\2\a\7\c\l\b\e\w\l\m\b\p\i\x\m\t\m\w\j\l\5\b\n\t\y\5\l\o\e\8\o\l\p\t\6\8\1\3\c\d\a\u\o\m\y\9\7\4\e\u\b\4\z\y\x\w\8\a\5\7\f\t\j\h\d\i ]] 00:06:20.112 00:06:20.112 real 0m1.797s 00:06:20.112 user 0m1.246s 00:06:20.112 sys 0m0.846s 00:06:20.112 07:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.112 07:31:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:20.112 ************************************ 00:06:20.112 END TEST dd_rw_offset 00:06:20.112 ************************************ 00:06:20.112 07:31:45 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:20.112 07:31:45 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:20.112 07:31:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:20.112 07:31:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:20.112 07:31:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:20.112 07:31:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:20.112 07:31:45 
spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:20.112 07:31:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:20.112 07:31:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:20.112 07:31:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:20.112 07:31:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.112 { 00:06:20.112 "subsystems": [ 00:06:20.112 { 00:06:20.112 "subsystem": "bdev", 00:06:20.112 "config": [ 00:06:20.112 { 00:06:20.112 "params": { 00:06:20.112 "trtype": "pcie", 00:06:20.112 "traddr": "0000:00:10.0", 00:06:20.112 "name": "Nvme0" 00:06:20.112 }, 00:06:20.112 "method": "bdev_nvme_attach_controller" 00:06:20.112 }, 00:06:20.112 { 00:06:20.112 "method": "bdev_wait_for_examine" 00:06:20.112 } 00:06:20.112 ] 00:06:20.112 } 00:06:20.112 ] 00:06:20.112 } 00:06:20.112 [2024-07-26 07:31:45.543053] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:06:20.112 [2024-07-26 07:31:45.543162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62209 ] 00:06:20.112 [2024-07-26 07:31:45.685932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.371 [2024-07-26 07:31:45.850393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.371 [2024-07-26 07:31:45.934838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.889  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:20.889 00:06:20.889 07:31:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.889 00:06:20.889 real 0m22.005s 00:06:20.889 user 0m15.761s 00:06:20.889 sys 0m8.928s 00:06:20.889 07:31:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.889 07:31:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.889 ************************************ 00:06:20.889 END TEST spdk_dd_basic_rw 00:06:20.889 ************************************ 00:06:20.889 07:31:46 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:20.889 07:31:46 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.889 07:31:46 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.889 07:31:46 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:20.889 ************************************ 00:06:20.889 START TEST spdk_dd_posix 00:06:20.889 ************************************ 00:06:20.889 07:31:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:21.148 * Looking for test storage... 
00:06:21.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:21.148 * First test run, liburing in use 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:21.148 ************************************ 00:06:21.148 START TEST dd_flag_append 00:06:21.148 ************************************ 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:21.148 07:31:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=i0r4kha2bhc484b8l1l3xf0wbnlpuh02 00:06:21.149 07:31:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:21.149 07:31:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:21.149 07:31:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:21.149 07:31:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=0n1qzdbwbyu5fih74ubm9kmbswyai25l 00:06:21.149 07:31:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s i0r4kha2bhc484b8l1l3xf0wbnlpuh02 00:06:21.149 07:31:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 0n1qzdbwbyu5fih74ubm9kmbswyai25l 00:06:21.149 07:31:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:21.149 [2024-07-26 07:31:46.607015] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:21.149 [2024-07-26 07:31:46.607151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62273 ] 00:06:21.149 [2024-07-26 07:31:46.747316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.407 [2024-07-26 07:31:46.880412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.407 [2024-07-26 07:31:46.965690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.925  Copying: 32/32 [B] (average 31 kBps) 00:06:21.925 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 0n1qzdbwbyu5fih74ubm9kmbswyai25li0r4kha2bhc484b8l1l3xf0wbnlpuh02 == \0\n\1\q\z\d\b\w\b\y\u\5\f\i\h\7\4\u\b\m\9\k\m\b\s\w\y\a\i\2\5\l\i\0\r\4\k\h\a\2\b\h\c\4\8\4\b\8\l\1\l\3\x\f\0\w\b\n\l\p\u\h\0\2 ]] 00:06:21.925 00:06:21.925 real 0m0.787s 00:06:21.925 user 0m0.470s 00:06:21.925 sys 0m0.396s 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:21.925 ************************************ 00:06:21.925 END TEST dd_flag_append 00:06:21.925 ************************************ 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:21.925 ************************************ 00:06:21.925 START TEST dd_flag_directory 00:06:21.925 ************************************ 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 
-- # case "$(type -t "$arg")" in 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:21.925 07:31:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:21.925 [2024-07-26 07:31:47.454129] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:06:21.925 [2024-07-26 07:31:47.454279] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62302 ] 00:06:22.184 [2024-07-26 07:31:47.599692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.184 [2024-07-26 07:31:47.755285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.442 [2024-07-26 07:31:47.838880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.442 [2024-07-26 07:31:47.887839] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:22.442 [2024-07-26 07:31:47.887925] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:22.442 [2024-07-26 07:31:47.887965] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.701 [2024-07-26 07:31:48.062978] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:22.701 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:22.701 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:22.701 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:22.701 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:22.701 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:22.701 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:22.701 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:22.701 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:22.701 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:22.701 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.701 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.701 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.701 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.701 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.701 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.701 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.701 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:22.701 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:22.701 [2024-07-26 07:31:48.256194] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:06:22.701 [2024-07-26 07:31:48.256337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62311 ] 00:06:22.960 [2024-07-26 07:31:48.401361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.960 [2024-07-26 07:31:48.509198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.219 [2024-07-26 07:31:48.592007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.219 [2024-07-26 07:31:48.640469] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:23.219 [2024-07-26 07:31:48.640574] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:23.219 [2024-07-26 07:31:48.640618] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:23.219 [2024-07-26 07:31:48.819575] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:23.478 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:23.478 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.478 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:23.478 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:23.478 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:23.478 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.478 00:06:23.478 real 0m1.566s 00:06:23.478 user 0m0.916s 00:06:23.478 sys 0m0.437s 00:06:23.478 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.478 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:23.478 ************************************ 00:06:23.478 END TEST dd_flag_directory 00:06:23.478 ************************************ 00:06:23.478 07:31:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test 
dd_flag_nofollow nofollow 00:06:23.478 07:31:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.478 07:31:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.478 07:31:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:23.478 ************************************ 00:06:23.478 START TEST dd_flag_nofollow 00:06:23.478 ************************************ 00:06:23.478 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:06:23.478 07:31:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:23.478 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:23.478 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:23.478 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:23.479 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.479 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:23.479 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.479 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.479 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.479 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.479 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.479 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.479 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.479 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.479 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:23.479 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.479 [2024-07-26 07:31:49.063804] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:23.479 [2024-07-26 07:31:49.063931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62345 ] 00:06:23.737 [2024-07-26 07:31:49.201377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.737 [2024-07-26 07:31:49.328476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.994 [2024-07-26 07:31:49.408889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.994 [2024-07-26 07:31:49.457366] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:23.994 [2024-07-26 07:31:49.457448] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:23.994 [2024-07-26 07:31:49.457487] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.252 [2024-07-26 07:31:49.633944] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:24.252 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:24.252 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.252 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:24.252 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:24.252 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:24.252 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:24.252 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:24.252 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:24.252 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:24.252 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.252 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.252 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.252 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.252 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.252 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.252 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.252 07:31:49 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:24.252 07:31:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:24.252 [2024-07-26 07:31:49.818200] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:06:24.252 [2024-07-26 07:31:49.818299] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62360 ] 00:06:24.510 [2024-07-26 07:31:49.951895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.510 [2024-07-26 07:31:50.087340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.767 [2024-07-26 07:31:50.167043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:24.767 [2024-07-26 07:31:50.215209] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:24.767 [2024-07-26 07:31:50.215307] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:24.767 [2024-07-26 07:31:50.215328] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:25.025 [2024-07-26 07:31:50.394125] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:25.025 07:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:25.025 07:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.025 07:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:25.025 07:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:25.025 07:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:25.026 07:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.026 07:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:25.026 07:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:25.026 07:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:25.026 07:31:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.026 [2024-07-26 07:31:50.590432] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:25.026 [2024-07-26 07:31:50.590550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62368 ] 00:06:25.284 [2024-07-26 07:31:50.725854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.284 [2024-07-26 07:31:50.861963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.542 [2024-07-26 07:31:50.942859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:25.800  Copying: 512/512 [B] (average 500 kBps) 00:06:25.800 00:06:25.800 07:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ u85rhhnz3b9uqhn8z4eradtj4fm8xqoab25dd1wnzuov7a7dfdjtbbxsj5xx8bd0s5d4tpk9m8m056cuu7w10wn8k7i7p0ibesftzaykc4ta4gktuvyfevcyyiiiliiz1xaecnfo4eai6787reavv603v3wxp60dg2lc01skizunum56a8bh5j1q18varg5z32ajd8bui1tvoibfa3wp6sntoykbcrglg5qzvwgrvp9r6am0ws6xx57846bx4yjlfd5ra3yw5jrefo9p3d0s97j7fm68sjucxuhaf10ohwa1c3spqbncd1jkp6o4s6fr26ppctzx7vis8tt71v1sfraxussdzmjagwimai7cmx8n9ac62dy5fy3euzwf190wq2hy6rc6ojg2jw76aox944j8us65ns1fcjooi33gjk5tob82ghu6v75yi797yfyp6su5zniay4wum8xil0s3cqpn01ig3vhazpmkf0ezrh7bodliovwxp3xaklibs1q1 == \u\8\5\r\h\h\n\z\3\b\9\u\q\h\n\8\z\4\e\r\a\d\t\j\4\f\m\8\x\q\o\a\b\2\5\d\d\1\w\n\z\u\o\v\7\a\7\d\f\d\j\t\b\b\x\s\j\5\x\x\8\b\d\0\s\5\d\4\t\p\k\9\m\8\m\0\5\6\c\u\u\7\w\1\0\w\n\8\k\7\i\7\p\0\i\b\e\s\f\t\z\a\y\k\c\4\t\a\4\g\k\t\u\v\y\f\e\v\c\y\y\i\i\i\l\i\i\z\1\x\a\e\c\n\f\o\4\e\a\i\6\7\8\7\r\e\a\v\v\6\0\3\v\3\w\x\p\6\0\d\g\2\l\c\0\1\s\k\i\z\u\n\u\m\5\6\a\8\b\h\5\j\1\q\1\8\v\a\r\g\5\z\3\2\a\j\d\8\b\u\i\1\t\v\o\i\b\f\a\3\w\p\6\s\n\t\o\y\k\b\c\r\g\l\g\5\q\z\v\w\g\r\v\p\9\r\6\a\m\0\w\s\6\x\x\5\7\8\4\6\b\x\4\y\j\l\f\d\5\r\a\3\y\w\5\j\r\e\f\o\9\p\3\d\0\s\9\7\j\7\f\m\6\8\s\j\u\c\x\u\h\a\f\1\0\o\h\w\a\1\c\3\s\p\q\b\n\c\d\1\j\k\p\6\o\4\s\6\f\r\2\6\p\p\c\t\z\x\7\v\i\s\8\t\t\7\1\v\1\s\f\r\a\x\u\s\s\d\z\m\j\a\g\w\i\m\a\i\7\c\m\x\8\n\9\a\c\6\2\d\y\5\f\y\3\e\u\z\w\f\1\9\0\w\q\2\h\y\6\r\c\6\o\j\g\2\j\w\7\6\a\o\x\9\4\4\j\8\u\s\6\5\n\s\1\f\c\j\o\o\i\3\3\g\j\k\5\t\o\b\8\2\g\h\u\6\v\7\5\y\i\7\9\7\y\f\y\p\6\s\u\5\z\n\i\a\y\4\w\u\m\8\x\i\l\0\s\3\c\q\p\n\0\1\i\g\3\v\h\a\z\p\m\k\f\0\e\z\r\h\7\b\o\d\l\i\o\v\w\x\p\3\x\a\k\l\i\b\s\1\q\1 ]] 00:06:25.800 00:06:25.800 real 0m2.289s 00:06:25.800 user 0m1.366s 00:06:25.800 sys 0m0.782s 00:06:25.800 07:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.800 07:31:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:25.800 ************************************ 00:06:25.800 END TEST dd_flag_nofollow 00:06:25.800 ************************************ 00:06:25.800 07:31:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:25.800 07:31:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.800 07:31:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.800 07:31:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:25.800 ************************************ 00:06:25.800 START TEST dd_flag_noatime 00:06:25.800 ************************************ 00:06:25.801 07:31:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:06:25.801 07:31:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:25.801 07:31:51 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:25.801 07:31:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:25.801 07:31:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:25.801 07:31:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:25.801 07:31:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:25.801 07:31:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721979110 00:06:25.801 07:31:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.801 07:31:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721979111 00:06:25.801 07:31:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:27.176 07:31:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.176 [2024-07-26 07:31:52.411910] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:06:27.176 [2024-07-26 07:31:52.411992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62410 ] 00:06:27.176 [2024-07-26 07:31:52.547801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.176 [2024-07-26 07:31:52.676871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.176 [2024-07-26 07:31:52.756050] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.692  Copying: 512/512 [B] (average 500 kBps) 00:06:27.692 00:06:27.692 07:31:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:27.692 07:31:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721979110 )) 00:06:27.693 07:31:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.693 07:31:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721979111 )) 00:06:27.693 07:31:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.693 [2024-07-26 07:31:53.182082] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:27.693 [2024-07-26 07:31:53.182204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62431 ] 00:06:27.959 [2024-07-26 07:31:53.318829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.959 [2024-07-26 07:31:53.427911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.959 [2024-07-26 07:31:53.509205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.526  Copying: 512/512 [B] (average 500 kBps) 00:06:28.526 00:06:28.526 07:31:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:28.526 07:31:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721979113 )) 00:06:28.526 ************************************ 00:06:28.526 END TEST dd_flag_noatime 00:06:28.526 ************************************ 00:06:28.526 00:06:28.526 real 0m2.552s 00:06:28.526 user 0m0.905s 00:06:28.526 sys 0m0.820s 00:06:28.526 07:31:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.526 07:31:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:28.526 07:31:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:28.526 07:31:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.526 07:31:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.526 07:31:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:28.526 ************************************ 00:06:28.526 START TEST dd_flags_misc 00:06:28.526 ************************************ 00:06:28.526 07:31:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:06:28.526 07:31:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:28.526 07:31:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:28.526 07:31:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:28.526 07:31:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:28.526 07:31:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:28.526 07:31:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:28.526 07:31:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:28.526 07:31:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:28.526 07:31:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:28.526 [2024-07-26 07:31:54.003534] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:28.526 [2024-07-26 07:31:54.003648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62465 ] 00:06:28.784 [2024-07-26 07:31:54.142822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.784 [2024-07-26 07:31:54.274446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.784 [2024-07-26 07:31:54.354992] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.374  Copying: 512/512 [B] (average 500 kBps) 00:06:29.374 00:06:29.374 07:31:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ cxma8b6bcy1rtet4u1xb9x0h9b685bmdkgq39edi69l0cmndn0nh6uqe0ebv64d67ag1liim8yecpo2s0iqm2qhayloempnmgxi6bufv8ndh161nla8ecyg4qxn1occ7ztuk1o7xdnucn4mwf494qtqdo5fgf5imdhvqdsmezt2kzxjl14tivfs5120h8yedh897bncs7irmuxejdx8hfpqrpmrmv8barxsehuwu61yywj49cmi3mqeaslin9ulir13so2utwn4mbsx2tzq1qwmonyuhtik2zwtc4nequ50e2zfvhcekbz76eyfpsf50q2wa1rbnwta4r0cro8a7yfyfkaup283tm3zl0t6gswhgcp8gg2ym326ogf71emsp0y7tmsuycmt2a26r0lonbb7rn0vm5lbgcigicyr4otz2ptaxx13uxsu6z5f1uc7wfcoc78mwqy1c4nsaffyfebyt1d57elr6e8j2nxsi89xl0ii0aoni0k8hwg57v7vx == \c\x\m\a\8\b\6\b\c\y\1\r\t\e\t\4\u\1\x\b\9\x\0\h\9\b\6\8\5\b\m\d\k\g\q\3\9\e\d\i\6\9\l\0\c\m\n\d\n\0\n\h\6\u\q\e\0\e\b\v\6\4\d\6\7\a\g\1\l\i\i\m\8\y\e\c\p\o\2\s\0\i\q\m\2\q\h\a\y\l\o\e\m\p\n\m\g\x\i\6\b\u\f\v\8\n\d\h\1\6\1\n\l\a\8\e\c\y\g\4\q\x\n\1\o\c\c\7\z\t\u\k\1\o\7\x\d\n\u\c\n\4\m\w\f\4\9\4\q\t\q\d\o\5\f\g\f\5\i\m\d\h\v\q\d\s\m\e\z\t\2\k\z\x\j\l\1\4\t\i\v\f\s\5\1\2\0\h\8\y\e\d\h\8\9\7\b\n\c\s\7\i\r\m\u\x\e\j\d\x\8\h\f\p\q\r\p\m\r\m\v\8\b\a\r\x\s\e\h\u\w\u\6\1\y\y\w\j\4\9\c\m\i\3\m\q\e\a\s\l\i\n\9\u\l\i\r\1\3\s\o\2\u\t\w\n\4\m\b\s\x\2\t\z\q\1\q\w\m\o\n\y\u\h\t\i\k\2\z\w\t\c\4\n\e\q\u\5\0\e\2\z\f\v\h\c\e\k\b\z\7\6\e\y\f\p\s\f\5\0\q\2\w\a\1\r\b\n\w\t\a\4\r\0\c\r\o\8\a\7\y\f\y\f\k\a\u\p\2\8\3\t\m\3\z\l\0\t\6\g\s\w\h\g\c\p\8\g\g\2\y\m\3\2\6\o\g\f\7\1\e\m\s\p\0\y\7\t\m\s\u\y\c\m\t\2\a\2\6\r\0\l\o\n\b\b\7\r\n\0\v\m\5\l\b\g\c\i\g\i\c\y\r\4\o\t\z\2\p\t\a\x\x\1\3\u\x\s\u\6\z\5\f\1\u\c\7\w\f\c\o\c\7\8\m\w\q\y\1\c\4\n\s\a\f\f\y\f\e\b\y\t\1\d\5\7\e\l\r\6\e\8\j\2\n\x\s\i\8\9\x\l\0\i\i\0\a\o\n\i\0\k\8\h\w\g\5\7\v\7\v\x ]] 00:06:29.374 07:31:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:29.374 07:31:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:29.374 [2024-07-26 07:31:54.763555] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:29.374 [2024-07-26 07:31:54.763657] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62475 ] 00:06:29.374 [2024-07-26 07:31:54.905670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.644 [2024-07-26 07:31:55.045816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.644 [2024-07-26 07:31:55.128876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.903  Copying: 512/512 [B] (average 500 kBps) 00:06:29.903 00:06:29.903 07:31:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ cxma8b6bcy1rtet4u1xb9x0h9b685bmdkgq39edi69l0cmndn0nh6uqe0ebv64d67ag1liim8yecpo2s0iqm2qhayloempnmgxi6bufv8ndh161nla8ecyg4qxn1occ7ztuk1o7xdnucn4mwf494qtqdo5fgf5imdhvqdsmezt2kzxjl14tivfs5120h8yedh897bncs7irmuxejdx8hfpqrpmrmv8barxsehuwu61yywj49cmi3mqeaslin9ulir13so2utwn4mbsx2tzq1qwmonyuhtik2zwtc4nequ50e2zfvhcekbz76eyfpsf50q2wa1rbnwta4r0cro8a7yfyfkaup283tm3zl0t6gswhgcp8gg2ym326ogf71emsp0y7tmsuycmt2a26r0lonbb7rn0vm5lbgcigicyr4otz2ptaxx13uxsu6z5f1uc7wfcoc78mwqy1c4nsaffyfebyt1d57elr6e8j2nxsi89xl0ii0aoni0k8hwg57v7vx == \c\x\m\a\8\b\6\b\c\y\1\r\t\e\t\4\u\1\x\b\9\x\0\h\9\b\6\8\5\b\m\d\k\g\q\3\9\e\d\i\6\9\l\0\c\m\n\d\n\0\n\h\6\u\q\e\0\e\b\v\6\4\d\6\7\a\g\1\l\i\i\m\8\y\e\c\p\o\2\s\0\i\q\m\2\q\h\a\y\l\o\e\m\p\n\m\g\x\i\6\b\u\f\v\8\n\d\h\1\6\1\n\l\a\8\e\c\y\g\4\q\x\n\1\o\c\c\7\z\t\u\k\1\o\7\x\d\n\u\c\n\4\m\w\f\4\9\4\q\t\q\d\o\5\f\g\f\5\i\m\d\h\v\q\d\s\m\e\z\t\2\k\z\x\j\l\1\4\t\i\v\f\s\5\1\2\0\h\8\y\e\d\h\8\9\7\b\n\c\s\7\i\r\m\u\x\e\j\d\x\8\h\f\p\q\r\p\m\r\m\v\8\b\a\r\x\s\e\h\u\w\u\6\1\y\y\w\j\4\9\c\m\i\3\m\q\e\a\s\l\i\n\9\u\l\i\r\1\3\s\o\2\u\t\w\n\4\m\b\s\x\2\t\z\q\1\q\w\m\o\n\y\u\h\t\i\k\2\z\w\t\c\4\n\e\q\u\5\0\e\2\z\f\v\h\c\e\k\b\z\7\6\e\y\f\p\s\f\5\0\q\2\w\a\1\r\b\n\w\t\a\4\r\0\c\r\o\8\a\7\y\f\y\f\k\a\u\p\2\8\3\t\m\3\z\l\0\t\6\g\s\w\h\g\c\p\8\g\g\2\y\m\3\2\6\o\g\f\7\1\e\m\s\p\0\y\7\t\m\s\u\y\c\m\t\2\a\2\6\r\0\l\o\n\b\b\7\r\n\0\v\m\5\l\b\g\c\i\g\i\c\y\r\4\o\t\z\2\p\t\a\x\x\1\3\u\x\s\u\6\z\5\f\1\u\c\7\w\f\c\o\c\7\8\m\w\q\y\1\c\4\n\s\a\f\f\y\f\e\b\y\t\1\d\5\7\e\l\r\6\e\8\j\2\n\x\s\i\8\9\x\l\0\i\i\0\a\o\n\i\0\k\8\h\w\g\5\7\v\7\v\x ]] 00:06:29.903 07:31:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:29.903 07:31:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:30.160 [2024-07-26 07:31:55.560249] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:30.160 [2024-07-26 07:31:55.560405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62490 ] 00:06:30.160 [2024-07-26 07:31:55.710494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.418 [2024-07-26 07:31:55.845587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.418 [2024-07-26 07:31:55.929023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.676  Copying: 512/512 [B] (average 166 kBps) 00:06:30.676 00:06:30.936 07:31:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ cxma8b6bcy1rtet4u1xb9x0h9b685bmdkgq39edi69l0cmndn0nh6uqe0ebv64d67ag1liim8yecpo2s0iqm2qhayloempnmgxi6bufv8ndh161nla8ecyg4qxn1occ7ztuk1o7xdnucn4mwf494qtqdo5fgf5imdhvqdsmezt2kzxjl14tivfs5120h8yedh897bncs7irmuxejdx8hfpqrpmrmv8barxsehuwu61yywj49cmi3mqeaslin9ulir13so2utwn4mbsx2tzq1qwmonyuhtik2zwtc4nequ50e2zfvhcekbz76eyfpsf50q2wa1rbnwta4r0cro8a7yfyfkaup283tm3zl0t6gswhgcp8gg2ym326ogf71emsp0y7tmsuycmt2a26r0lonbb7rn0vm5lbgcigicyr4otz2ptaxx13uxsu6z5f1uc7wfcoc78mwqy1c4nsaffyfebyt1d57elr6e8j2nxsi89xl0ii0aoni0k8hwg57v7vx == \c\x\m\a\8\b\6\b\c\y\1\r\t\e\t\4\u\1\x\b\9\x\0\h\9\b\6\8\5\b\m\d\k\g\q\3\9\e\d\i\6\9\l\0\c\m\n\d\n\0\n\h\6\u\q\e\0\e\b\v\6\4\d\6\7\a\g\1\l\i\i\m\8\y\e\c\p\o\2\s\0\i\q\m\2\q\h\a\y\l\o\e\m\p\n\m\g\x\i\6\b\u\f\v\8\n\d\h\1\6\1\n\l\a\8\e\c\y\g\4\q\x\n\1\o\c\c\7\z\t\u\k\1\o\7\x\d\n\u\c\n\4\m\w\f\4\9\4\q\t\q\d\o\5\f\g\f\5\i\m\d\h\v\q\d\s\m\e\z\t\2\k\z\x\j\l\1\4\t\i\v\f\s\5\1\2\0\h\8\y\e\d\h\8\9\7\b\n\c\s\7\i\r\m\u\x\e\j\d\x\8\h\f\p\q\r\p\m\r\m\v\8\b\a\r\x\s\e\h\u\w\u\6\1\y\y\w\j\4\9\c\m\i\3\m\q\e\a\s\l\i\n\9\u\l\i\r\1\3\s\o\2\u\t\w\n\4\m\b\s\x\2\t\z\q\1\q\w\m\o\n\y\u\h\t\i\k\2\z\w\t\c\4\n\e\q\u\5\0\e\2\z\f\v\h\c\e\k\b\z\7\6\e\y\f\p\s\f\5\0\q\2\w\a\1\r\b\n\w\t\a\4\r\0\c\r\o\8\a\7\y\f\y\f\k\a\u\p\2\8\3\t\m\3\z\l\0\t\6\g\s\w\h\g\c\p\8\g\g\2\y\m\3\2\6\o\g\f\7\1\e\m\s\p\0\y\7\t\m\s\u\y\c\m\t\2\a\2\6\r\0\l\o\n\b\b\7\r\n\0\v\m\5\l\b\g\c\i\g\i\c\y\r\4\o\t\z\2\p\t\a\x\x\1\3\u\x\s\u\6\z\5\f\1\u\c\7\w\f\c\o\c\7\8\m\w\q\y\1\c\4\n\s\a\f\f\y\f\e\b\y\t\1\d\5\7\e\l\r\6\e\8\j\2\n\x\s\i\8\9\x\l\0\i\i\0\a\o\n\i\0\k\8\h\w\g\5\7\v\7\v\x ]] 00:06:30.936 07:31:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.936 07:31:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:30.936 [2024-07-26 07:31:56.337891] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:30.936 [2024-07-26 07:31:56.337985] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62499 ] 00:06:30.936 [2024-07-26 07:31:56.472256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.194 [2024-07-26 07:31:56.581124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.194 [2024-07-26 07:31:56.659648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.453  Copying: 512/512 [B] (average 250 kBps) 00:06:31.453 00:06:31.453 07:31:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ cxma8b6bcy1rtet4u1xb9x0h9b685bmdkgq39edi69l0cmndn0nh6uqe0ebv64d67ag1liim8yecpo2s0iqm2qhayloempnmgxi6bufv8ndh161nla8ecyg4qxn1occ7ztuk1o7xdnucn4mwf494qtqdo5fgf5imdhvqdsmezt2kzxjl14tivfs5120h8yedh897bncs7irmuxejdx8hfpqrpmrmv8barxsehuwu61yywj49cmi3mqeaslin9ulir13so2utwn4mbsx2tzq1qwmonyuhtik2zwtc4nequ50e2zfvhcekbz76eyfpsf50q2wa1rbnwta4r0cro8a7yfyfkaup283tm3zl0t6gswhgcp8gg2ym326ogf71emsp0y7tmsuycmt2a26r0lonbb7rn0vm5lbgcigicyr4otz2ptaxx13uxsu6z5f1uc7wfcoc78mwqy1c4nsaffyfebyt1d57elr6e8j2nxsi89xl0ii0aoni0k8hwg57v7vx == \c\x\m\a\8\b\6\b\c\y\1\r\t\e\t\4\u\1\x\b\9\x\0\h\9\b\6\8\5\b\m\d\k\g\q\3\9\e\d\i\6\9\l\0\c\m\n\d\n\0\n\h\6\u\q\e\0\e\b\v\6\4\d\6\7\a\g\1\l\i\i\m\8\y\e\c\p\o\2\s\0\i\q\m\2\q\h\a\y\l\o\e\m\p\n\m\g\x\i\6\b\u\f\v\8\n\d\h\1\6\1\n\l\a\8\e\c\y\g\4\q\x\n\1\o\c\c\7\z\t\u\k\1\o\7\x\d\n\u\c\n\4\m\w\f\4\9\4\q\t\q\d\o\5\f\g\f\5\i\m\d\h\v\q\d\s\m\e\z\t\2\k\z\x\j\l\1\4\t\i\v\f\s\5\1\2\0\h\8\y\e\d\h\8\9\7\b\n\c\s\7\i\r\m\u\x\e\j\d\x\8\h\f\p\q\r\p\m\r\m\v\8\b\a\r\x\s\e\h\u\w\u\6\1\y\y\w\j\4\9\c\m\i\3\m\q\e\a\s\l\i\n\9\u\l\i\r\1\3\s\o\2\u\t\w\n\4\m\b\s\x\2\t\z\q\1\q\w\m\o\n\y\u\h\t\i\k\2\z\w\t\c\4\n\e\q\u\5\0\e\2\z\f\v\h\c\e\k\b\z\7\6\e\y\f\p\s\f\5\0\q\2\w\a\1\r\b\n\w\t\a\4\r\0\c\r\o\8\a\7\y\f\y\f\k\a\u\p\2\8\3\t\m\3\z\l\0\t\6\g\s\w\h\g\c\p\8\g\g\2\y\m\3\2\6\o\g\f\7\1\e\m\s\p\0\y\7\t\m\s\u\y\c\m\t\2\a\2\6\r\0\l\o\n\b\b\7\r\n\0\v\m\5\l\b\g\c\i\g\i\c\y\r\4\o\t\z\2\p\t\a\x\x\1\3\u\x\s\u\6\z\5\f\1\u\c\7\w\f\c\o\c\7\8\m\w\q\y\1\c\4\n\s\a\f\f\y\f\e\b\y\t\1\d\5\7\e\l\r\6\e\8\j\2\n\x\s\i\8\9\x\l\0\i\i\0\a\o\n\i\0\k\8\h\w\g\5\7\v\7\v\x ]] 00:06:31.453 07:31:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:31.453 07:31:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:31.453 07:31:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:31.453 07:31:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:31.453 07:31:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:31.453 07:31:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:31.712 [2024-07-26 07:31:57.074414] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:31.712 [2024-07-26 07:31:57.074547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62511 ] 00:06:31.712 [2024-07-26 07:31:57.216152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.970 [2024-07-26 07:31:57.355225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.970 [2024-07-26 07:31:57.435295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.228  Copying: 512/512 [B] (average 500 kBps) 00:06:32.228 00:06:32.228 07:31:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0xh10hz0fqw9l57fro2dyaqwwy26yu68qzq0z8m8743h40ewedy0ypffremadiyzhfypkigasv9u6hu981j7hlfrxup4kg19yrp4sb2p1wxpvad8wer71z99cfy90cxii1ztvs994jznaa9w6mfh0hkhy0fma7z3ouevxc14hzqq2l5481ouhk9x90u9wqf7yt1lg38ok0advk8j3g8szjke3vnxtra4c2n2ul0lkoqd5so3e3hwm2gavje5pazmn2v30ws63fmc81dpa3nd8jgpx6m42crrzo0z9hw7myqkk3jrrk9ry809x6abuwso9wf04k2ea3n2ju8knxt843zq5j7mvehtljrlp426s00724u87vp2o33b194irx4wectqib1kr3ok63j5tj20sf04shhbhdutxrntl8klw7bs1osaaxqj7nkg3wngez3pslow13enps99gs2ohs3q45l1gjmqhwqsd9x9wxfml2v9s8jo9fo614abk3xxebrs == \0\x\h\1\0\h\z\0\f\q\w\9\l\5\7\f\r\o\2\d\y\a\q\w\w\y\2\6\y\u\6\8\q\z\q\0\z\8\m\8\7\4\3\h\4\0\e\w\e\d\y\0\y\p\f\f\r\e\m\a\d\i\y\z\h\f\y\p\k\i\g\a\s\v\9\u\6\h\u\9\8\1\j\7\h\l\f\r\x\u\p\4\k\g\1\9\y\r\p\4\s\b\2\p\1\w\x\p\v\a\d\8\w\e\r\7\1\z\9\9\c\f\y\9\0\c\x\i\i\1\z\t\v\s\9\9\4\j\z\n\a\a\9\w\6\m\f\h\0\h\k\h\y\0\f\m\a\7\z\3\o\u\e\v\x\c\1\4\h\z\q\q\2\l\5\4\8\1\o\u\h\k\9\x\9\0\u\9\w\q\f\7\y\t\1\l\g\3\8\o\k\0\a\d\v\k\8\j\3\g\8\s\z\j\k\e\3\v\n\x\t\r\a\4\c\2\n\2\u\l\0\l\k\o\q\d\5\s\o\3\e\3\h\w\m\2\g\a\v\j\e\5\p\a\z\m\n\2\v\3\0\w\s\6\3\f\m\c\8\1\d\p\a\3\n\d\8\j\g\p\x\6\m\4\2\c\r\r\z\o\0\z\9\h\w\7\m\y\q\k\k\3\j\r\r\k\9\r\y\8\0\9\x\6\a\b\u\w\s\o\9\w\f\0\4\k\2\e\a\3\n\2\j\u\8\k\n\x\t\8\4\3\z\q\5\j\7\m\v\e\h\t\l\j\r\l\p\4\2\6\s\0\0\7\2\4\u\8\7\v\p\2\o\3\3\b\1\9\4\i\r\x\4\w\e\c\t\q\i\b\1\k\r\3\o\k\6\3\j\5\t\j\2\0\s\f\0\4\s\h\h\b\h\d\u\t\x\r\n\t\l\8\k\l\w\7\b\s\1\o\s\a\a\x\q\j\7\n\k\g\3\w\n\g\e\z\3\p\s\l\o\w\1\3\e\n\p\s\9\9\g\s\2\o\h\s\3\q\4\5\l\1\g\j\m\q\h\w\q\s\d\9\x\9\w\x\f\m\l\2\v\9\s\8\j\o\9\f\o\6\1\4\a\b\k\3\x\x\e\b\r\s ]] 00:06:32.228 07:31:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:32.228 07:31:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:32.486 [2024-07-26 07:31:57.832959] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:32.486 [2024-07-26 07:31:57.833059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62524 ] 00:06:32.486 [2024-07-26 07:31:57.971240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.744 [2024-07-26 07:31:58.106306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.744 [2024-07-26 07:31:58.185873] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.003  Copying: 512/512 [B] (average 500 kBps) 00:06:33.003 00:06:33.003 07:31:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0xh10hz0fqw9l57fro2dyaqwwy26yu68qzq0z8m8743h40ewedy0ypffremadiyzhfypkigasv9u6hu981j7hlfrxup4kg19yrp4sb2p1wxpvad8wer71z99cfy90cxii1ztvs994jznaa9w6mfh0hkhy0fma7z3ouevxc14hzqq2l5481ouhk9x90u9wqf7yt1lg38ok0advk8j3g8szjke3vnxtra4c2n2ul0lkoqd5so3e3hwm2gavje5pazmn2v30ws63fmc81dpa3nd8jgpx6m42crrzo0z9hw7myqkk3jrrk9ry809x6abuwso9wf04k2ea3n2ju8knxt843zq5j7mvehtljrlp426s00724u87vp2o33b194irx4wectqib1kr3ok63j5tj20sf04shhbhdutxrntl8klw7bs1osaaxqj7nkg3wngez3pslow13enps99gs2ohs3q45l1gjmqhwqsd9x9wxfml2v9s8jo9fo614abk3xxebrs == \0\x\h\1\0\h\z\0\f\q\w\9\l\5\7\f\r\o\2\d\y\a\q\w\w\y\2\6\y\u\6\8\q\z\q\0\z\8\m\8\7\4\3\h\4\0\e\w\e\d\y\0\y\p\f\f\r\e\m\a\d\i\y\z\h\f\y\p\k\i\g\a\s\v\9\u\6\h\u\9\8\1\j\7\h\l\f\r\x\u\p\4\k\g\1\9\y\r\p\4\s\b\2\p\1\w\x\p\v\a\d\8\w\e\r\7\1\z\9\9\c\f\y\9\0\c\x\i\i\1\z\t\v\s\9\9\4\j\z\n\a\a\9\w\6\m\f\h\0\h\k\h\y\0\f\m\a\7\z\3\o\u\e\v\x\c\1\4\h\z\q\q\2\l\5\4\8\1\o\u\h\k\9\x\9\0\u\9\w\q\f\7\y\t\1\l\g\3\8\o\k\0\a\d\v\k\8\j\3\g\8\s\z\j\k\e\3\v\n\x\t\r\a\4\c\2\n\2\u\l\0\l\k\o\q\d\5\s\o\3\e\3\h\w\m\2\g\a\v\j\e\5\p\a\z\m\n\2\v\3\0\w\s\6\3\f\m\c\8\1\d\p\a\3\n\d\8\j\g\p\x\6\m\4\2\c\r\r\z\o\0\z\9\h\w\7\m\y\q\k\k\3\j\r\r\k\9\r\y\8\0\9\x\6\a\b\u\w\s\o\9\w\f\0\4\k\2\e\a\3\n\2\j\u\8\k\n\x\t\8\4\3\z\q\5\j\7\m\v\e\h\t\l\j\r\l\p\4\2\6\s\0\0\7\2\4\u\8\7\v\p\2\o\3\3\b\1\9\4\i\r\x\4\w\e\c\t\q\i\b\1\k\r\3\o\k\6\3\j\5\t\j\2\0\s\f\0\4\s\h\h\b\h\d\u\t\x\r\n\t\l\8\k\l\w\7\b\s\1\o\s\a\a\x\q\j\7\n\k\g\3\w\n\g\e\z\3\p\s\l\o\w\1\3\e\n\p\s\9\9\g\s\2\o\h\s\3\q\4\5\l\1\g\j\m\q\h\w\q\s\d\9\x\9\w\x\f\m\l\2\v\9\s\8\j\o\9\f\o\6\1\4\a\b\k\3\x\x\e\b\r\s ]] 00:06:33.003 07:31:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:33.003 07:31:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:33.003 [2024-07-26 07:31:58.589162] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:33.003 [2024-07-26 07:31:58.589266] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62534 ] 00:06:33.261 [2024-07-26 07:31:58.730540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.520 [2024-07-26 07:31:58.880004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.520 [2024-07-26 07:31:58.964885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.778  Copying: 512/512 [B] (average 83 kBps) 00:06:33.778 00:06:33.778 07:31:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0xh10hz0fqw9l57fro2dyaqwwy26yu68qzq0z8m8743h40ewedy0ypffremadiyzhfypkigasv9u6hu981j7hlfrxup4kg19yrp4sb2p1wxpvad8wer71z99cfy90cxii1ztvs994jznaa9w6mfh0hkhy0fma7z3ouevxc14hzqq2l5481ouhk9x90u9wqf7yt1lg38ok0advk8j3g8szjke3vnxtra4c2n2ul0lkoqd5so3e3hwm2gavje5pazmn2v30ws63fmc81dpa3nd8jgpx6m42crrzo0z9hw7myqkk3jrrk9ry809x6abuwso9wf04k2ea3n2ju8knxt843zq5j7mvehtljrlp426s00724u87vp2o33b194irx4wectqib1kr3ok63j5tj20sf04shhbhdutxrntl8klw7bs1osaaxqj7nkg3wngez3pslow13enps99gs2ohs3q45l1gjmqhwqsd9x9wxfml2v9s8jo9fo614abk3xxebrs == \0\x\h\1\0\h\z\0\f\q\w\9\l\5\7\f\r\o\2\d\y\a\q\w\w\y\2\6\y\u\6\8\q\z\q\0\z\8\m\8\7\4\3\h\4\0\e\w\e\d\y\0\y\p\f\f\r\e\m\a\d\i\y\z\h\f\y\p\k\i\g\a\s\v\9\u\6\h\u\9\8\1\j\7\h\l\f\r\x\u\p\4\k\g\1\9\y\r\p\4\s\b\2\p\1\w\x\p\v\a\d\8\w\e\r\7\1\z\9\9\c\f\y\9\0\c\x\i\i\1\z\t\v\s\9\9\4\j\z\n\a\a\9\w\6\m\f\h\0\h\k\h\y\0\f\m\a\7\z\3\o\u\e\v\x\c\1\4\h\z\q\q\2\l\5\4\8\1\o\u\h\k\9\x\9\0\u\9\w\q\f\7\y\t\1\l\g\3\8\o\k\0\a\d\v\k\8\j\3\g\8\s\z\j\k\e\3\v\n\x\t\r\a\4\c\2\n\2\u\l\0\l\k\o\q\d\5\s\o\3\e\3\h\w\m\2\g\a\v\j\e\5\p\a\z\m\n\2\v\3\0\w\s\6\3\f\m\c\8\1\d\p\a\3\n\d\8\j\g\p\x\6\m\4\2\c\r\r\z\o\0\z\9\h\w\7\m\y\q\k\k\3\j\r\r\k\9\r\y\8\0\9\x\6\a\b\u\w\s\o\9\w\f\0\4\k\2\e\a\3\n\2\j\u\8\k\n\x\t\8\4\3\z\q\5\j\7\m\v\e\h\t\l\j\r\l\p\4\2\6\s\0\0\7\2\4\u\8\7\v\p\2\o\3\3\b\1\9\4\i\r\x\4\w\e\c\t\q\i\b\1\k\r\3\o\k\6\3\j\5\t\j\2\0\s\f\0\4\s\h\h\b\h\d\u\t\x\r\n\t\l\8\k\l\w\7\b\s\1\o\s\a\a\x\q\j\7\n\k\g\3\w\n\g\e\z\3\p\s\l\o\w\1\3\e\n\p\s\9\9\g\s\2\o\h\s\3\q\4\5\l\1\g\j\m\q\h\w\q\s\d\9\x\9\w\x\f\m\l\2\v\9\s\8\j\o\9\f\o\6\1\4\a\b\k\3\x\x\e\b\r\s ]] 00:06:33.778 07:31:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:33.778 07:31:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:33.778 [2024-07-26 07:31:59.375894] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:33.778 [2024-07-26 07:31:59.376032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62548 ] 00:06:34.036 [2024-07-26 07:31:59.518164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.294 [2024-07-26 07:31:59.684667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.294 [2024-07-26 07:31:59.766878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.552  Copying: 512/512 [B] (average 500 kBps) 00:06:34.552 00:06:34.552 07:32:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0xh10hz0fqw9l57fro2dyaqwwy26yu68qzq0z8m8743h40ewedy0ypffremadiyzhfypkigasv9u6hu981j7hlfrxup4kg19yrp4sb2p1wxpvad8wer71z99cfy90cxii1ztvs994jznaa9w6mfh0hkhy0fma7z3ouevxc14hzqq2l5481ouhk9x90u9wqf7yt1lg38ok0advk8j3g8szjke3vnxtra4c2n2ul0lkoqd5so3e3hwm2gavje5pazmn2v30ws63fmc81dpa3nd8jgpx6m42crrzo0z9hw7myqkk3jrrk9ry809x6abuwso9wf04k2ea3n2ju8knxt843zq5j7mvehtljrlp426s00724u87vp2o33b194irx4wectqib1kr3ok63j5tj20sf04shhbhdutxrntl8klw7bs1osaaxqj7nkg3wngez3pslow13enps99gs2ohs3q45l1gjmqhwqsd9x9wxfml2v9s8jo9fo614abk3xxebrs == \0\x\h\1\0\h\z\0\f\q\w\9\l\5\7\f\r\o\2\d\y\a\q\w\w\y\2\6\y\u\6\8\q\z\q\0\z\8\m\8\7\4\3\h\4\0\e\w\e\d\y\0\y\p\f\f\r\e\m\a\d\i\y\z\h\f\y\p\k\i\g\a\s\v\9\u\6\h\u\9\8\1\j\7\h\l\f\r\x\u\p\4\k\g\1\9\y\r\p\4\s\b\2\p\1\w\x\p\v\a\d\8\w\e\r\7\1\z\9\9\c\f\y\9\0\c\x\i\i\1\z\t\v\s\9\9\4\j\z\n\a\a\9\w\6\m\f\h\0\h\k\h\y\0\f\m\a\7\z\3\o\u\e\v\x\c\1\4\h\z\q\q\2\l\5\4\8\1\o\u\h\k\9\x\9\0\u\9\w\q\f\7\y\t\1\l\g\3\8\o\k\0\a\d\v\k\8\j\3\g\8\s\z\j\k\e\3\v\n\x\t\r\a\4\c\2\n\2\u\l\0\l\k\o\q\d\5\s\o\3\e\3\h\w\m\2\g\a\v\j\e\5\p\a\z\m\n\2\v\3\0\w\s\6\3\f\m\c\8\1\d\p\a\3\n\d\8\j\g\p\x\6\m\4\2\c\r\r\z\o\0\z\9\h\w\7\m\y\q\k\k\3\j\r\r\k\9\r\y\8\0\9\x\6\a\b\u\w\s\o\9\w\f\0\4\k\2\e\a\3\n\2\j\u\8\k\n\x\t\8\4\3\z\q\5\j\7\m\v\e\h\t\l\j\r\l\p\4\2\6\s\0\0\7\2\4\u\8\7\v\p\2\o\3\3\b\1\9\4\i\r\x\4\w\e\c\t\q\i\b\1\k\r\3\o\k\6\3\j\5\t\j\2\0\s\f\0\4\s\h\h\b\h\d\u\t\x\r\n\t\l\8\k\l\w\7\b\s\1\o\s\a\a\x\q\j\7\n\k\g\3\w\n\g\e\z\3\p\s\l\o\w\1\3\e\n\p\s\9\9\g\s\2\o\h\s\3\q\4\5\l\1\g\j\m\q\h\w\q\s\d\9\x\9\w\x\f\m\l\2\v\9\s\8\j\o\9\f\o\6\1\4\a\b\k\3\x\x\e\b\r\s ]] 00:06:34.552 00:06:34.552 real 0m6.193s 00:06:34.552 user 0m3.714s 00:06:34.552 sys 0m3.096s 00:06:34.552 07:32:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.552 07:32:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:34.552 ************************************ 00:06:34.552 END TEST dd_flags_misc 00:06:34.552 ************************************ 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:34.810 * Second test run, disabling liburing, forcing AIO 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:06:34.810 ************************************ 00:06:34.810 START TEST dd_flag_append_forced_aio 00:06:34.810 ************************************ 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=x24bbaln1asq9kj3vngx581gmg9g0g62 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=npckr5n2d1omfgb0h6etk397c44dzfpz 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s x24bbaln1asq9kj3vngx581gmg9g0g62 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s npckr5n2d1omfgb0h6etk397c44dzfpz 00:06:34.810 07:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:34.810 [2024-07-26 07:32:00.247586] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:34.810 [2024-07-26 07:32:00.248357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62581 ] 00:06:34.810 [2024-07-26 07:32:00.389328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.069 [2024-07-26 07:32:00.512781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.069 [2024-07-26 07:32:00.592849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.636  Copying: 32/32 [B] (average 31 kBps) 00:06:35.636 00:06:35.636 07:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ npckr5n2d1omfgb0h6etk397c44dzfpzx24bbaln1asq9kj3vngx581gmg9g0g62 == \n\p\c\k\r\5\n\2\d\1\o\m\f\g\b\0\h\6\e\t\k\3\9\7\c\4\4\d\z\f\p\z\x\2\4\b\b\a\l\n\1\a\s\q\9\k\j\3\v\n\g\x\5\8\1\g\m\g\9\g\0\g\6\2 ]] 00:06:35.636 00:06:35.636 real 0m0.808s 00:06:35.636 user 0m0.486s 00:06:35.636 sys 0m0.198s 00:06:35.636 07:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.636 07:32:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:35.636 ************************************ 00:06:35.636 END TEST dd_flag_append_forced_aio 00:06:35.636 ************************************ 00:06:35.636 07:32:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:35.636 07:32:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.636 07:32:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.636 07:32:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:35.636 ************************************ 00:06:35.636 START TEST dd_flag_directory_forced_aio 00:06:35.636 ************************************ 00:06:35.636 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:06:35.636 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:35.636 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:35.636 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:35.636 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.636 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.636 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.636 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.636 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.636 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.636 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.636 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:35.636 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:35.636 [2024-07-26 07:32:01.102586] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:06:35.636 [2024-07-26 07:32:01.102667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62609 ] 00:06:35.636 [2024-07-26 07:32:01.234511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.895 [2024-07-26 07:32:01.370653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.895 [2024-07-26 07:32:01.446499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.895 [2024-07-26 07:32:01.492414] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:35.895 [2024-07-26 07:32:01.492527] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:35.895 [2024-07-26 07:32:01.492561] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.154 [2024-07-26 07:32:01.667499] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:36.412 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:36.412 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:36.412 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:36.412 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:36.412 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:36.412 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:36.412 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:36.412 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:36.412 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 
00:06:36.412 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.412 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.412 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.412 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.412 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.412 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.412 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.412 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:36.412 07:32:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:36.412 [2024-07-26 07:32:01.858694] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:06:36.412 [2024-07-26 07:32:01.858803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62624 ] 00:06:36.412 [2024-07-26 07:32:01.999477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.670 [2024-07-26 07:32:02.133274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.670 [2024-07-26 07:32:02.213385] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.670 [2024-07-26 07:32:02.259727] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:36.670 [2024-07-26 07:32:02.259809] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:36.670 [2024-07-26 07:32:02.259826] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.930 [2024-07-26 07:32:02.433458] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:06:37.188 ************************************ 00:06:37.188 END TEST dd_flag_directory_forced_aio 00:06:37.188 ************************************ 00:06:37.188 00:06:37.188 real 0m1.510s 00:06:37.188 user 0m0.896s 00:06:37.188 sys 0m0.400s 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:37.188 ************************************ 00:06:37.188 START TEST dd_flag_nofollow_forced_aio 00:06:37.188 ************************************ 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.188 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.189 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.189 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.189 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.189 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.189 07:32:02 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.189 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:37.189 07:32:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.189 [2024-07-26 07:32:02.683843] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:06:37.189 [2024-07-26 07:32:02.683940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62657 ] 00:06:37.447 [2024-07-26 07:32:02.822826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.447 [2024-07-26 07:32:02.996441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.704 [2024-07-26 07:32:03.081491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:37.704 [2024-07-26 07:32:03.136260] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:37.705 [2024-07-26 07:32:03.136349] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:37.705 [2024-07-26 07:32:03.136371] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.961 [2024-07-26 07:32:03.332346] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:37.961 07:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:37.961 07:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.961 07:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:37.961 07:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:37.961 07:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:37.961 07:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.961 07:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:37.961 07:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:37.961 07:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:37.961 07:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.961 07:32:03 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.961 07:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.961 07:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.961 07:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.961 07:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.961 07:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.961 07:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:37.961 07:32:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:37.961 [2024-07-26 07:32:03.558111] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:06:37.961 [2024-07-26 07:32:03.558230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62667 ] 00:06:38.217 [2024-07-26 07:32:03.700271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.475 [2024-07-26 07:32:03.887634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.475 [2024-07-26 07:32:03.978771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.475 [2024-07-26 07:32:04.035581] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:38.475 [2024-07-26 07:32:04.035664] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:38.475 [2024-07-26 07:32:04.035685] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.733 [2024-07-26 07:32:04.239338] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:38.990 07:32:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:38.990 07:32:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:38.990 07:32:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:38.990 07:32:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:38.990 07:32:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:38.990 07:32:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:38.990 07:32:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:06:38.990 07:32:04 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:38.990 07:32:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:38.990 07:32:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.990 [2024-07-26 07:32:04.465519] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:06:38.990 [2024-07-26 07:32:04.465675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62676 ] 00:06:39.248 [2024-07-26 07:32:04.607298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.248 [2024-07-26 07:32:04.800301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.506 [2024-07-26 07:32:04.889114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.764  Copying: 512/512 [B] (average 500 kBps) 00:06:39.764 00:06:39.764 ************************************ 00:06:39.764 END TEST dd_flag_nofollow_forced_aio 00:06:39.764 ************************************ 00:06:39.764 07:32:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ rldu1n8afbv204odpolmtbh75xnm1b5rh9f74o2dusorty355a7gg9vi7lsml34ua1fgkt1ihfly6yp3hffy14usf7k8yvg7b9e9v6xfgc2sibyu8fjtzqxspb8r7628tpk6dg25f2e6rzj7lb86zjb49011y7pa2g1ffbgexher1cgwo15b0kc9z8jx0oyf3udkxej7f5zxqmc3x0cilv7fxbtrc1gbtm2kt64qo6amj7u0mcikvgproz2lalinz1wrus1nct56wfxk9u4nng9t9j9z825ahi3neqawlkto63k208ddxef30sybkbi0eqncc0su9i0jwf2cb8s0wm12rzdjbhoo0u0rt0bfwp21165p3vx2dhzwljzd78hib9lqfqdk93j2h42p6ojanp2jcwjbq321jplcqhikeqt2gec4qk4qtrp1yd1kwde55o6zgg9t0bu86oheo7pstzykgmp7aq5jz6tldb7bh543l2z4hkv44erdjhz72dwr == \r\l\d\u\1\n\8\a\f\b\v\2\0\4\o\d\p\o\l\m\t\b\h\7\5\x\n\m\1\b\5\r\h\9\f\7\4\o\2\d\u\s\o\r\t\y\3\5\5\a\7\g\g\9\v\i\7\l\s\m\l\3\4\u\a\1\f\g\k\t\1\i\h\f\l\y\6\y\p\3\h\f\f\y\1\4\u\s\f\7\k\8\y\v\g\7\b\9\e\9\v\6\x\f\g\c\2\s\i\b\y\u\8\f\j\t\z\q\x\s\p\b\8\r\7\6\2\8\t\p\k\6\d\g\2\5\f\2\e\6\r\z\j\7\l\b\8\6\z\j\b\4\9\0\1\1\y\7\p\a\2\g\1\f\f\b\g\e\x\h\e\r\1\c\g\w\o\1\5\b\0\k\c\9\z\8\j\x\0\o\y\f\3\u\d\k\x\e\j\7\f\5\z\x\q\m\c\3\x\0\c\i\l\v\7\f\x\b\t\r\c\1\g\b\t\m\2\k\t\6\4\q\o\6\a\m\j\7\u\0\m\c\i\k\v\g\p\r\o\z\2\l\a\l\i\n\z\1\w\r\u\s\1\n\c\t\5\6\w\f\x\k\9\u\4\n\n\g\9\t\9\j\9\z\8\2\5\a\h\i\3\n\e\q\a\w\l\k\t\o\6\3\k\2\0\8\d\d\x\e\f\3\0\s\y\b\k\b\i\0\e\q\n\c\c\0\s\u\9\i\0\j\w\f\2\c\b\8\s\0\w\m\1\2\r\z\d\j\b\h\o\o\0\u\0\r\t\0\b\f\w\p\2\1\1\6\5\p\3\v\x\2\d\h\z\w\l\j\z\d\7\8\h\i\b\9\l\q\f\q\d\k\9\3\j\2\h\4\2\p\6\o\j\a\n\p\2\j\c\w\j\b\q\3\2\1\j\p\l\c\q\h\i\k\e\q\t\2\g\e\c\4\q\k\4\q\t\r\p\1\y\d\1\k\w\d\e\5\5\o\6\z\g\g\9\t\0\b\u\8\6\o\h\e\o\7\p\s\t\z\y\k\g\m\p\7\a\q\5\j\z\6\t\l\d\b\7\b\h\5\4\3\l\2\z\4\h\k\v\4\4\e\r\d\j\h\z\7\2\d\w\r ]] 00:06:39.764 00:06:39.764 real 0m2.702s 00:06:39.764 user 0m1.701s 00:06:39.764 sys 0m0.658s 00:06:39.764 07:32:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.764 07:32:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:39.764 07:32:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:06:39.764 
07:32:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.764 07:32:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.764 07:32:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:40.022 ************************************ 00:06:40.022 START TEST dd_flag_noatime_forced_aio 00:06:40.022 ************************************ 00:06:40.022 07:32:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:06:40.022 07:32:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:40.022 07:32:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:40.022 07:32:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:40.022 07:32:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:40.022 07:32:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:40.022 07:32:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:40.022 07:32:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721979124 00:06:40.022 07:32:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:40.022 07:32:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721979125 00:06:40.022 07:32:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:40.955 07:32:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:40.955 [2024-07-26 07:32:06.450582] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:40.955 [2024-07-26 07:32:06.450918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62721 ] 00:06:41.213 [2024-07-26 07:32:06.586463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.213 [2024-07-26 07:32:06.705800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.213 [2024-07-26 07:32:06.783090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.729  Copying: 512/512 [B] (average 500 kBps) 00:06:41.729 00:06:41.729 07:32:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:41.729 07:32:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721979124 )) 00:06:41.729 07:32:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:41.729 07:32:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721979125 )) 00:06:41.729 07:32:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:41.729 [2024-07-26 07:32:07.237462] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:06:41.729 [2024-07-26 07:32:07.237611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62738 ] 00:06:41.987 [2024-07-26 07:32:07.374823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.987 [2024-07-26 07:32:07.503644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.987 [2024-07-26 07:32:07.579258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.503  Copying: 512/512 [B] (average 500 kBps) 00:06:42.504 00:06:42.504 07:32:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:42.504 07:32:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721979127 )) 00:06:42.504 00:06:42.504 real 0m2.586s 00:06:42.504 user 0m0.936s 00:06:42.504 sys 0m0.399s 00:06:42.504 07:32:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.504 07:32:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:42.504 ************************************ 00:06:42.504 END TEST dd_flag_noatime_forced_aio 00:06:42.504 ************************************ 00:06:42.504 07:32:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:42.504 07:32:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.504 07:32:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.504 07:32:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:42.504 
************************************ 00:06:42.504 START TEST dd_flags_misc_forced_aio 00:06:42.504 ************************************ 00:06:42.504 07:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:06:42.504 07:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:42.504 07:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:42.504 07:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:42.504 07:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:42.504 07:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:42.504 07:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:42.504 07:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:42.504 07:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:42.504 07:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:42.504 [2024-07-26 07:32:08.069392] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:06:42.504 [2024-07-26 07:32:08.069498] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62770 ] 00:06:42.762 [2024-07-26 07:32:08.206509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.762 [2024-07-26 07:32:08.330206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.021 [2024-07-26 07:32:08.405800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.280  Copying: 512/512 [B] (average 500 kBps) 00:06:43.280 00:06:43.280 07:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ j6yts2g7jd1705ukigkladp9q3rv2yacvozcbht98by29yf54fxwn1woa387sahbt6iuva0xww9v3jix3cpk5k6ybs28dih4boi68u7sm6o9adlw4x4h0d9oy5fe4mowzjdpkpe1isqbl6f7cmf0r17idkyi3tx8ennt4vs61eft67db9vgce2bylzcwiem0tjw8bekhwhve0lyl9oymlpt9g2ipqd1u47gx6xzmrkdfjnj4danihfstidee1p7ch70ct726gpxigy9npy9a0bcpjmc4inxhf6l9gkg3gzl1pdq3qpy6nnpw3nzoy4hnd7rth6hv0e7bcj4y72gfroecbapgmqdnqd28yoiwzsuh2szxe8gltodrp5czg3nnn3tcs5pc4y4keks1it1mgjat0hbq3vguxtqt8ftxvxgmwl7oey94qc7k5po8idga59msf1rfvtc50uu417a8z7afgplql17sqzmd063zgxvhk6y6vv1zenlexgef3uui == 
\j\6\y\t\s\2\g\7\j\d\1\7\0\5\u\k\i\g\k\l\a\d\p\9\q\3\r\v\2\y\a\c\v\o\z\c\b\h\t\9\8\b\y\2\9\y\f\5\4\f\x\w\n\1\w\o\a\3\8\7\s\a\h\b\t\6\i\u\v\a\0\x\w\w\9\v\3\j\i\x\3\c\p\k\5\k\6\y\b\s\2\8\d\i\h\4\b\o\i\6\8\u\7\s\m\6\o\9\a\d\l\w\4\x\4\h\0\d\9\o\y\5\f\e\4\m\o\w\z\j\d\p\k\p\e\1\i\s\q\b\l\6\f\7\c\m\f\0\r\1\7\i\d\k\y\i\3\t\x\8\e\n\n\t\4\v\s\6\1\e\f\t\6\7\d\b\9\v\g\c\e\2\b\y\l\z\c\w\i\e\m\0\t\j\w\8\b\e\k\h\w\h\v\e\0\l\y\l\9\o\y\m\l\p\t\9\g\2\i\p\q\d\1\u\4\7\g\x\6\x\z\m\r\k\d\f\j\n\j\4\d\a\n\i\h\f\s\t\i\d\e\e\1\p\7\c\h\7\0\c\t\7\2\6\g\p\x\i\g\y\9\n\p\y\9\a\0\b\c\p\j\m\c\4\i\n\x\h\f\6\l\9\g\k\g\3\g\z\l\1\p\d\q\3\q\p\y\6\n\n\p\w\3\n\z\o\y\4\h\n\d\7\r\t\h\6\h\v\0\e\7\b\c\j\4\y\7\2\g\f\r\o\e\c\b\a\p\g\m\q\d\n\q\d\2\8\y\o\i\w\z\s\u\h\2\s\z\x\e\8\g\l\t\o\d\r\p\5\c\z\g\3\n\n\n\3\t\c\s\5\p\c\4\y\4\k\e\k\s\1\i\t\1\m\g\j\a\t\0\h\b\q\3\v\g\u\x\t\q\t\8\f\t\x\v\x\g\m\w\l\7\o\e\y\9\4\q\c\7\k\5\p\o\8\i\d\g\a\5\9\m\s\f\1\r\f\v\t\c\5\0\u\u\4\1\7\a\8\z\7\a\f\g\p\l\q\l\1\7\s\q\z\m\d\0\6\3\z\g\x\v\h\k\6\y\6\v\v\1\z\e\n\l\e\x\g\e\f\3\u\u\i ]] 00:06:43.280 07:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:43.280 07:32:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:43.280 [2024-07-26 07:32:08.802593] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:06:43.280 [2024-07-26 07:32:08.802680] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62783 ] 00:06:43.539 [2024-07-26 07:32:08.938950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.539 [2024-07-26 07:32:09.044216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.539 [2024-07-26 07:32:09.121315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.055  Copying: 512/512 [B] (average 500 kBps) 00:06:44.055 00:06:44.055 07:32:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ j6yts2g7jd1705ukigkladp9q3rv2yacvozcbht98by29yf54fxwn1woa387sahbt6iuva0xww9v3jix3cpk5k6ybs28dih4boi68u7sm6o9adlw4x4h0d9oy5fe4mowzjdpkpe1isqbl6f7cmf0r17idkyi3tx8ennt4vs61eft67db9vgce2bylzcwiem0tjw8bekhwhve0lyl9oymlpt9g2ipqd1u47gx6xzmrkdfjnj4danihfstidee1p7ch70ct726gpxigy9npy9a0bcpjmc4inxhf6l9gkg3gzl1pdq3qpy6nnpw3nzoy4hnd7rth6hv0e7bcj4y72gfroecbapgmqdnqd28yoiwzsuh2szxe8gltodrp5czg3nnn3tcs5pc4y4keks1it1mgjat0hbq3vguxtqt8ftxvxgmwl7oey94qc7k5po8idga59msf1rfvtc50uu417a8z7afgplql17sqzmd063zgxvhk6y6vv1zenlexgef3uui == 
\j\6\y\t\s\2\g\7\j\d\1\7\0\5\u\k\i\g\k\l\a\d\p\9\q\3\r\v\2\y\a\c\v\o\z\c\b\h\t\9\8\b\y\2\9\y\f\5\4\f\x\w\n\1\w\o\a\3\8\7\s\a\h\b\t\6\i\u\v\a\0\x\w\w\9\v\3\j\i\x\3\c\p\k\5\k\6\y\b\s\2\8\d\i\h\4\b\o\i\6\8\u\7\s\m\6\o\9\a\d\l\w\4\x\4\h\0\d\9\o\y\5\f\e\4\m\o\w\z\j\d\p\k\p\e\1\i\s\q\b\l\6\f\7\c\m\f\0\r\1\7\i\d\k\y\i\3\t\x\8\e\n\n\t\4\v\s\6\1\e\f\t\6\7\d\b\9\v\g\c\e\2\b\y\l\z\c\w\i\e\m\0\t\j\w\8\b\e\k\h\w\h\v\e\0\l\y\l\9\o\y\m\l\p\t\9\g\2\i\p\q\d\1\u\4\7\g\x\6\x\z\m\r\k\d\f\j\n\j\4\d\a\n\i\h\f\s\t\i\d\e\e\1\p\7\c\h\7\0\c\t\7\2\6\g\p\x\i\g\y\9\n\p\y\9\a\0\b\c\p\j\m\c\4\i\n\x\h\f\6\l\9\g\k\g\3\g\z\l\1\p\d\q\3\q\p\y\6\n\n\p\w\3\n\z\o\y\4\h\n\d\7\r\t\h\6\h\v\0\e\7\b\c\j\4\y\7\2\g\f\r\o\e\c\b\a\p\g\m\q\d\n\q\d\2\8\y\o\i\w\z\s\u\h\2\s\z\x\e\8\g\l\t\o\d\r\p\5\c\z\g\3\n\n\n\3\t\c\s\5\p\c\4\y\4\k\e\k\s\1\i\t\1\m\g\j\a\t\0\h\b\q\3\v\g\u\x\t\q\t\8\f\t\x\v\x\g\m\w\l\7\o\e\y\9\4\q\c\7\k\5\p\o\8\i\d\g\a\5\9\m\s\f\1\r\f\v\t\c\5\0\u\u\4\1\7\a\8\z\7\a\f\g\p\l\q\l\1\7\s\q\z\m\d\0\6\3\z\g\x\v\h\k\6\y\6\v\v\1\z\e\n\l\e\x\g\e\f\3\u\u\i ]] 00:06:44.055 07:32:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:44.055 07:32:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:44.055 [2024-07-26 07:32:09.569209] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:06:44.055 [2024-07-26 07:32:09.569317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62785 ] 00:06:44.314 [2024-07-26 07:32:09.706299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.314 [2024-07-26 07:32:09.849714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.572 [2024-07-26 07:32:09.925298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.831  Copying: 512/512 [B] (average 166 kBps) 00:06:44.831 00:06:44.831 07:32:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ j6yts2g7jd1705ukigkladp9q3rv2yacvozcbht98by29yf54fxwn1woa387sahbt6iuva0xww9v3jix3cpk5k6ybs28dih4boi68u7sm6o9adlw4x4h0d9oy5fe4mowzjdpkpe1isqbl6f7cmf0r17idkyi3tx8ennt4vs61eft67db9vgce2bylzcwiem0tjw8bekhwhve0lyl9oymlpt9g2ipqd1u47gx6xzmrkdfjnj4danihfstidee1p7ch70ct726gpxigy9npy9a0bcpjmc4inxhf6l9gkg3gzl1pdq3qpy6nnpw3nzoy4hnd7rth6hv0e7bcj4y72gfroecbapgmqdnqd28yoiwzsuh2szxe8gltodrp5czg3nnn3tcs5pc4y4keks1it1mgjat0hbq3vguxtqt8ftxvxgmwl7oey94qc7k5po8idga59msf1rfvtc50uu417a8z7afgplql17sqzmd063zgxvhk6y6vv1zenlexgef3uui == 
\j\6\y\t\s\2\g\7\j\d\1\7\0\5\u\k\i\g\k\l\a\d\p\9\q\3\r\v\2\y\a\c\v\o\z\c\b\h\t\9\8\b\y\2\9\y\f\5\4\f\x\w\n\1\w\o\a\3\8\7\s\a\h\b\t\6\i\u\v\a\0\x\w\w\9\v\3\j\i\x\3\c\p\k\5\k\6\y\b\s\2\8\d\i\h\4\b\o\i\6\8\u\7\s\m\6\o\9\a\d\l\w\4\x\4\h\0\d\9\o\y\5\f\e\4\m\o\w\z\j\d\p\k\p\e\1\i\s\q\b\l\6\f\7\c\m\f\0\r\1\7\i\d\k\y\i\3\t\x\8\e\n\n\t\4\v\s\6\1\e\f\t\6\7\d\b\9\v\g\c\e\2\b\y\l\z\c\w\i\e\m\0\t\j\w\8\b\e\k\h\w\h\v\e\0\l\y\l\9\o\y\m\l\p\t\9\g\2\i\p\q\d\1\u\4\7\g\x\6\x\z\m\r\k\d\f\j\n\j\4\d\a\n\i\h\f\s\t\i\d\e\e\1\p\7\c\h\7\0\c\t\7\2\6\g\p\x\i\g\y\9\n\p\y\9\a\0\b\c\p\j\m\c\4\i\n\x\h\f\6\l\9\g\k\g\3\g\z\l\1\p\d\q\3\q\p\y\6\n\n\p\w\3\n\z\o\y\4\h\n\d\7\r\t\h\6\h\v\0\e\7\b\c\j\4\y\7\2\g\f\r\o\e\c\b\a\p\g\m\q\d\n\q\d\2\8\y\o\i\w\z\s\u\h\2\s\z\x\e\8\g\l\t\o\d\r\p\5\c\z\g\3\n\n\n\3\t\c\s\5\p\c\4\y\4\k\e\k\s\1\i\t\1\m\g\j\a\t\0\h\b\q\3\v\g\u\x\t\q\t\8\f\t\x\v\x\g\m\w\l\7\o\e\y\9\4\q\c\7\k\5\p\o\8\i\d\g\a\5\9\m\s\f\1\r\f\v\t\c\5\0\u\u\4\1\7\a\8\z\7\a\f\g\p\l\q\l\1\7\s\q\z\m\d\0\6\3\z\g\x\v\h\k\6\y\6\v\v\1\z\e\n\l\e\x\g\e\f\3\u\u\i ]] 00:06:44.831 07:32:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:44.831 07:32:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:44.831 [2024-07-26 07:32:10.363084] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:06:44.831 [2024-07-26 07:32:10.363199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62798 ] 00:06:45.092 [2024-07-26 07:32:10.502129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.092 [2024-07-26 07:32:10.616834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.092 [2024-07-26 07:32:10.692283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.607  Copying: 512/512 [B] (average 250 kBps) 00:06:45.607 00:06:45.607 07:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ j6yts2g7jd1705ukigkladp9q3rv2yacvozcbht98by29yf54fxwn1woa387sahbt6iuva0xww9v3jix3cpk5k6ybs28dih4boi68u7sm6o9adlw4x4h0d9oy5fe4mowzjdpkpe1isqbl6f7cmf0r17idkyi3tx8ennt4vs61eft67db9vgce2bylzcwiem0tjw8bekhwhve0lyl9oymlpt9g2ipqd1u47gx6xzmrkdfjnj4danihfstidee1p7ch70ct726gpxigy9npy9a0bcpjmc4inxhf6l9gkg3gzl1pdq3qpy6nnpw3nzoy4hnd7rth6hv0e7bcj4y72gfroecbapgmqdnqd28yoiwzsuh2szxe8gltodrp5czg3nnn3tcs5pc4y4keks1it1mgjat0hbq3vguxtqt8ftxvxgmwl7oey94qc7k5po8idga59msf1rfvtc50uu417a8z7afgplql17sqzmd063zgxvhk6y6vv1zenlexgef3uui == 
\j\6\y\t\s\2\g\7\j\d\1\7\0\5\u\k\i\g\k\l\a\d\p\9\q\3\r\v\2\y\a\c\v\o\z\c\b\h\t\9\8\b\y\2\9\y\f\5\4\f\x\w\n\1\w\o\a\3\8\7\s\a\h\b\t\6\i\u\v\a\0\x\w\w\9\v\3\j\i\x\3\c\p\k\5\k\6\y\b\s\2\8\d\i\h\4\b\o\i\6\8\u\7\s\m\6\o\9\a\d\l\w\4\x\4\h\0\d\9\o\y\5\f\e\4\m\o\w\z\j\d\p\k\p\e\1\i\s\q\b\l\6\f\7\c\m\f\0\r\1\7\i\d\k\y\i\3\t\x\8\e\n\n\t\4\v\s\6\1\e\f\t\6\7\d\b\9\v\g\c\e\2\b\y\l\z\c\w\i\e\m\0\t\j\w\8\b\e\k\h\w\h\v\e\0\l\y\l\9\o\y\m\l\p\t\9\g\2\i\p\q\d\1\u\4\7\g\x\6\x\z\m\r\k\d\f\j\n\j\4\d\a\n\i\h\f\s\t\i\d\e\e\1\p\7\c\h\7\0\c\t\7\2\6\g\p\x\i\g\y\9\n\p\y\9\a\0\b\c\p\j\m\c\4\i\n\x\h\f\6\l\9\g\k\g\3\g\z\l\1\p\d\q\3\q\p\y\6\n\n\p\w\3\n\z\o\y\4\h\n\d\7\r\t\h\6\h\v\0\e\7\b\c\j\4\y\7\2\g\f\r\o\e\c\b\a\p\g\m\q\d\n\q\d\2\8\y\o\i\w\z\s\u\h\2\s\z\x\e\8\g\l\t\o\d\r\p\5\c\z\g\3\n\n\n\3\t\c\s\5\p\c\4\y\4\k\e\k\s\1\i\t\1\m\g\j\a\t\0\h\b\q\3\v\g\u\x\t\q\t\8\f\t\x\v\x\g\m\w\l\7\o\e\y\9\4\q\c\7\k\5\p\o\8\i\d\g\a\5\9\m\s\f\1\r\f\v\t\c\5\0\u\u\4\1\7\a\8\z\7\a\f\g\p\l\q\l\1\7\s\q\z\m\d\0\6\3\z\g\x\v\h\k\6\y\6\v\v\1\z\e\n\l\e\x\g\e\f\3\u\u\i ]] 00:06:45.607 07:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:45.607 07:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:45.607 07:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:45.607 07:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:45.607 07:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:45.607 07:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:45.607 [2024-07-26 07:32:11.153192] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:45.607 [2024-07-26 07:32:11.153295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62811 ] 00:06:45.865 [2024-07-26 07:32:11.291145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.865 [2024-07-26 07:32:11.428872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.124 [2024-07-26 07:32:11.504525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.383  Copying: 512/512 [B] (average 500 kBps) 00:06:46.383 00:06:46.383 07:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 75j6hbkzfb05xhb2zrmo0k8iy53dcqynbkdlcgb43jhw6k41wmw1mmlvi6odezah1vnm6hltpxy8590vycl7o7kzm01izrooa331ce37d4w5n9cnx1p802jqf7x4lyp4hwf8f0iumr6jjnwlic7um4umw7ibyqomeispaws2zvfchodyiwzmu4fh1v0pfuwlbbrj430735bhv0k1kgzw67rar9k5ct4x4srt8s70varszh7akzr22c7zagtzmzuidlnejwhsg03azkvexg3fg90przhzuy5aypoae2ibvjv0kba144yxzyiin8ayqc3fnqkh4whytjod9zi469meiarnluqy3h238qcu5rvew6q6d1vgqg9l2eaxbqail91cve1li66kqt2qeqeruhktt9o8ikr2jk7c133oixab0yoqek940h73tnj4w4pofhmz8fs85ej80kjn7goxulpuao2sp24frpys291dkj9g55m4j0i8jz02ztc1b5xskhmk == \7\5\j\6\h\b\k\z\f\b\0\5\x\h\b\2\z\r\m\o\0\k\8\i\y\5\3\d\c\q\y\n\b\k\d\l\c\g\b\4\3\j\h\w\6\k\4\1\w\m\w\1\m\m\l\v\i\6\o\d\e\z\a\h\1\v\n\m\6\h\l\t\p\x\y\8\5\9\0\v\y\c\l\7\o\7\k\z\m\0\1\i\z\r\o\o\a\3\3\1\c\e\3\7\d\4\w\5\n\9\c\n\x\1\p\8\0\2\j\q\f\7\x\4\l\y\p\4\h\w\f\8\f\0\i\u\m\r\6\j\j\n\w\l\i\c\7\u\m\4\u\m\w\7\i\b\y\q\o\m\e\i\s\p\a\w\s\2\z\v\f\c\h\o\d\y\i\w\z\m\u\4\f\h\1\v\0\p\f\u\w\l\b\b\r\j\4\3\0\7\3\5\b\h\v\0\k\1\k\g\z\w\6\7\r\a\r\9\k\5\c\t\4\x\4\s\r\t\8\s\7\0\v\a\r\s\z\h\7\a\k\z\r\2\2\c\7\z\a\g\t\z\m\z\u\i\d\l\n\e\j\w\h\s\g\0\3\a\z\k\v\e\x\g\3\f\g\9\0\p\r\z\h\z\u\y\5\a\y\p\o\a\e\2\i\b\v\j\v\0\k\b\a\1\4\4\y\x\z\y\i\i\n\8\a\y\q\c\3\f\n\q\k\h\4\w\h\y\t\j\o\d\9\z\i\4\6\9\m\e\i\a\r\n\l\u\q\y\3\h\2\3\8\q\c\u\5\r\v\e\w\6\q\6\d\1\v\g\q\g\9\l\2\e\a\x\b\q\a\i\l\9\1\c\v\e\1\l\i\6\6\k\q\t\2\q\e\q\e\r\u\h\k\t\t\9\o\8\i\k\r\2\j\k\7\c\1\3\3\o\i\x\a\b\0\y\o\q\e\k\9\4\0\h\7\3\t\n\j\4\w\4\p\o\f\h\m\z\8\f\s\8\5\e\j\8\0\k\j\n\7\g\o\x\u\l\p\u\a\o\2\s\p\2\4\f\r\p\y\s\2\9\1\d\k\j\9\g\5\5\m\4\j\0\i\8\j\z\0\2\z\t\c\1\b\5\x\s\k\h\m\k ]] 00:06:46.383 07:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:46.383 07:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:46.383 [2024-07-26 07:32:11.911588] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:46.383 [2024-07-26 07:32:11.911686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62823 ] 00:06:46.641 [2024-07-26 07:32:12.049717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.641 [2024-07-26 07:32:12.150283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.641 [2024-07-26 07:32:12.226876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.157  Copying: 512/512 [B] (average 500 kBps) 00:06:47.157 00:06:47.157 07:32:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 75j6hbkzfb05xhb2zrmo0k8iy53dcqynbkdlcgb43jhw6k41wmw1mmlvi6odezah1vnm6hltpxy8590vycl7o7kzm01izrooa331ce37d4w5n9cnx1p802jqf7x4lyp4hwf8f0iumr6jjnwlic7um4umw7ibyqomeispaws2zvfchodyiwzmu4fh1v0pfuwlbbrj430735bhv0k1kgzw67rar9k5ct4x4srt8s70varszh7akzr22c7zagtzmzuidlnejwhsg03azkvexg3fg90przhzuy5aypoae2ibvjv0kba144yxzyiin8ayqc3fnqkh4whytjod9zi469meiarnluqy3h238qcu5rvew6q6d1vgqg9l2eaxbqail91cve1li66kqt2qeqeruhktt9o8ikr2jk7c133oixab0yoqek940h73tnj4w4pofhmz8fs85ej80kjn7goxulpuao2sp24frpys291dkj9g55m4j0i8jz02ztc1b5xskhmk == \7\5\j\6\h\b\k\z\f\b\0\5\x\h\b\2\z\r\m\o\0\k\8\i\y\5\3\d\c\q\y\n\b\k\d\l\c\g\b\4\3\j\h\w\6\k\4\1\w\m\w\1\m\m\l\v\i\6\o\d\e\z\a\h\1\v\n\m\6\h\l\t\p\x\y\8\5\9\0\v\y\c\l\7\o\7\k\z\m\0\1\i\z\r\o\o\a\3\3\1\c\e\3\7\d\4\w\5\n\9\c\n\x\1\p\8\0\2\j\q\f\7\x\4\l\y\p\4\h\w\f\8\f\0\i\u\m\r\6\j\j\n\w\l\i\c\7\u\m\4\u\m\w\7\i\b\y\q\o\m\e\i\s\p\a\w\s\2\z\v\f\c\h\o\d\y\i\w\z\m\u\4\f\h\1\v\0\p\f\u\w\l\b\b\r\j\4\3\0\7\3\5\b\h\v\0\k\1\k\g\z\w\6\7\r\a\r\9\k\5\c\t\4\x\4\s\r\t\8\s\7\0\v\a\r\s\z\h\7\a\k\z\r\2\2\c\7\z\a\g\t\z\m\z\u\i\d\l\n\e\j\w\h\s\g\0\3\a\z\k\v\e\x\g\3\f\g\9\0\p\r\z\h\z\u\y\5\a\y\p\o\a\e\2\i\b\v\j\v\0\k\b\a\1\4\4\y\x\z\y\i\i\n\8\a\y\q\c\3\f\n\q\k\h\4\w\h\y\t\j\o\d\9\z\i\4\6\9\m\e\i\a\r\n\l\u\q\y\3\h\2\3\8\q\c\u\5\r\v\e\w\6\q\6\d\1\v\g\q\g\9\l\2\e\a\x\b\q\a\i\l\9\1\c\v\e\1\l\i\6\6\k\q\t\2\q\e\q\e\r\u\h\k\t\t\9\o\8\i\k\r\2\j\k\7\c\1\3\3\o\i\x\a\b\0\y\o\q\e\k\9\4\0\h\7\3\t\n\j\4\w\4\p\o\f\h\m\z\8\f\s\8\5\e\j\8\0\k\j\n\7\g\o\x\u\l\p\u\a\o\2\s\p\2\4\f\r\p\y\s\2\9\1\d\k\j\9\g\5\5\m\4\j\0\i\8\j\z\0\2\z\t\c\1\b\5\x\s\k\h\m\k ]] 00:06:47.157 07:32:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:47.157 07:32:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:47.157 [2024-07-26 07:32:12.647233] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:47.158 [2024-07-26 07:32:12.647332] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62826 ] 00:06:47.416 [2024-07-26 07:32:12.788904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.416 [2024-07-26 07:32:12.938029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.416 [2024-07-26 07:32:13.014440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.931  Copying: 512/512 [B] (average 500 kBps) 00:06:47.931 00:06:47.931 07:32:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 75j6hbkzfb05xhb2zrmo0k8iy53dcqynbkdlcgb43jhw6k41wmw1mmlvi6odezah1vnm6hltpxy8590vycl7o7kzm01izrooa331ce37d4w5n9cnx1p802jqf7x4lyp4hwf8f0iumr6jjnwlic7um4umw7ibyqomeispaws2zvfchodyiwzmu4fh1v0pfuwlbbrj430735bhv0k1kgzw67rar9k5ct4x4srt8s70varszh7akzr22c7zagtzmzuidlnejwhsg03azkvexg3fg90przhzuy5aypoae2ibvjv0kba144yxzyiin8ayqc3fnqkh4whytjod9zi469meiarnluqy3h238qcu5rvew6q6d1vgqg9l2eaxbqail91cve1li66kqt2qeqeruhktt9o8ikr2jk7c133oixab0yoqek940h73tnj4w4pofhmz8fs85ej80kjn7goxulpuao2sp24frpys291dkj9g55m4j0i8jz02ztc1b5xskhmk == \7\5\j\6\h\b\k\z\f\b\0\5\x\h\b\2\z\r\m\o\0\k\8\i\y\5\3\d\c\q\y\n\b\k\d\l\c\g\b\4\3\j\h\w\6\k\4\1\w\m\w\1\m\m\l\v\i\6\o\d\e\z\a\h\1\v\n\m\6\h\l\t\p\x\y\8\5\9\0\v\y\c\l\7\o\7\k\z\m\0\1\i\z\r\o\o\a\3\3\1\c\e\3\7\d\4\w\5\n\9\c\n\x\1\p\8\0\2\j\q\f\7\x\4\l\y\p\4\h\w\f\8\f\0\i\u\m\r\6\j\j\n\w\l\i\c\7\u\m\4\u\m\w\7\i\b\y\q\o\m\e\i\s\p\a\w\s\2\z\v\f\c\h\o\d\y\i\w\z\m\u\4\f\h\1\v\0\p\f\u\w\l\b\b\r\j\4\3\0\7\3\5\b\h\v\0\k\1\k\g\z\w\6\7\r\a\r\9\k\5\c\t\4\x\4\s\r\t\8\s\7\0\v\a\r\s\z\h\7\a\k\z\r\2\2\c\7\z\a\g\t\z\m\z\u\i\d\l\n\e\j\w\h\s\g\0\3\a\z\k\v\e\x\g\3\f\g\9\0\p\r\z\h\z\u\y\5\a\y\p\o\a\e\2\i\b\v\j\v\0\k\b\a\1\4\4\y\x\z\y\i\i\n\8\a\y\q\c\3\f\n\q\k\h\4\w\h\y\t\j\o\d\9\z\i\4\6\9\m\e\i\a\r\n\l\u\q\y\3\h\2\3\8\q\c\u\5\r\v\e\w\6\q\6\d\1\v\g\q\g\9\l\2\e\a\x\b\q\a\i\l\9\1\c\v\e\1\l\i\6\6\k\q\t\2\q\e\q\e\r\u\h\k\t\t\9\o\8\i\k\r\2\j\k\7\c\1\3\3\o\i\x\a\b\0\y\o\q\e\k\9\4\0\h\7\3\t\n\j\4\w\4\p\o\f\h\m\z\8\f\s\8\5\e\j\8\0\k\j\n\7\g\o\x\u\l\p\u\a\o\2\s\p\2\4\f\r\p\y\s\2\9\1\d\k\j\9\g\5\5\m\4\j\0\i\8\j\z\0\2\z\t\c\1\b\5\x\s\k\h\m\k ]] 00:06:47.931 07:32:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:47.931 07:32:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:47.931 [2024-07-26 07:32:13.441892] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:47.932 [2024-07-26 07:32:13.441993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62839 ] 00:06:48.188 [2024-07-26 07:32:13.581489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.188 [2024-07-26 07:32:13.686881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.188 [2024-07-26 07:32:13.760952] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.703  Copying: 512/512 [B] (average 250 kBps) 00:06:48.703 00:06:48.703 07:32:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 75j6hbkzfb05xhb2zrmo0k8iy53dcqynbkdlcgb43jhw6k41wmw1mmlvi6odezah1vnm6hltpxy8590vycl7o7kzm01izrooa331ce37d4w5n9cnx1p802jqf7x4lyp4hwf8f0iumr6jjnwlic7um4umw7ibyqomeispaws2zvfchodyiwzmu4fh1v0pfuwlbbrj430735bhv0k1kgzw67rar9k5ct4x4srt8s70varszh7akzr22c7zagtzmzuidlnejwhsg03azkvexg3fg90przhzuy5aypoae2ibvjv0kba144yxzyiin8ayqc3fnqkh4whytjod9zi469meiarnluqy3h238qcu5rvew6q6d1vgqg9l2eaxbqail91cve1li66kqt2qeqeruhktt9o8ikr2jk7c133oixab0yoqek940h73tnj4w4pofhmz8fs85ej80kjn7goxulpuao2sp24frpys291dkj9g55m4j0i8jz02ztc1b5xskhmk == \7\5\j\6\h\b\k\z\f\b\0\5\x\h\b\2\z\r\m\o\0\k\8\i\y\5\3\d\c\q\y\n\b\k\d\l\c\g\b\4\3\j\h\w\6\k\4\1\w\m\w\1\m\m\l\v\i\6\o\d\e\z\a\h\1\v\n\m\6\h\l\t\p\x\y\8\5\9\0\v\y\c\l\7\o\7\k\z\m\0\1\i\z\r\o\o\a\3\3\1\c\e\3\7\d\4\w\5\n\9\c\n\x\1\p\8\0\2\j\q\f\7\x\4\l\y\p\4\h\w\f\8\f\0\i\u\m\r\6\j\j\n\w\l\i\c\7\u\m\4\u\m\w\7\i\b\y\q\o\m\e\i\s\p\a\w\s\2\z\v\f\c\h\o\d\y\i\w\z\m\u\4\f\h\1\v\0\p\f\u\w\l\b\b\r\j\4\3\0\7\3\5\b\h\v\0\k\1\k\g\z\w\6\7\r\a\r\9\k\5\c\t\4\x\4\s\r\t\8\s\7\0\v\a\r\s\z\h\7\a\k\z\r\2\2\c\7\z\a\g\t\z\m\z\u\i\d\l\n\e\j\w\h\s\g\0\3\a\z\k\v\e\x\g\3\f\g\9\0\p\r\z\h\z\u\y\5\a\y\p\o\a\e\2\i\b\v\j\v\0\k\b\a\1\4\4\y\x\z\y\i\i\n\8\a\y\q\c\3\f\n\q\k\h\4\w\h\y\t\j\o\d\9\z\i\4\6\9\m\e\i\a\r\n\l\u\q\y\3\h\2\3\8\q\c\u\5\r\v\e\w\6\q\6\d\1\v\g\q\g\9\l\2\e\a\x\b\q\a\i\l\9\1\c\v\e\1\l\i\6\6\k\q\t\2\q\e\q\e\r\u\h\k\t\t\9\o\8\i\k\r\2\j\k\7\c\1\3\3\o\i\x\a\b\0\y\o\q\e\k\9\4\0\h\7\3\t\n\j\4\w\4\p\o\f\h\m\z\8\f\s\8\5\e\j\8\0\k\j\n\7\g\o\x\u\l\p\u\a\o\2\s\p\2\4\f\r\p\y\s\2\9\1\d\k\j\9\g\5\5\m\4\j\0\i\8\j\z\0\2\z\t\c\1\b\5\x\s\k\h\m\k ]] 00:06:48.703 00:06:48.703 real 0m6.105s 00:06:48.703 user 0m3.610s 00:06:48.704 sys 0m1.510s 00:06:48.704 ************************************ 00:06:48.704 END TEST dd_flags_misc_forced_aio 00:06:48.704 ************************************ 00:06:48.704 07:32:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.704 07:32:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:48.704 07:32:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:48.704 07:32:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:48.704 07:32:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:48.704 ************************************ 00:06:48.704 END TEST spdk_dd_posix 00:06:48.704 ************************************ 00:06:48.704 00:06:48.704 real 0m27.718s 00:06:48.704 user 0m15.185s 00:06:48.704 sys 0m9.080s 00:06:48.704 07:32:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 
-- # xtrace_disable 00:06:48.704 07:32:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:48.704 07:32:14 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:48.704 07:32:14 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.704 07:32:14 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.704 07:32:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:48.704 ************************************ 00:06:48.704 START TEST spdk_dd_malloc 00:06:48.704 ************************************ 00:06:48.704 07:32:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:48.704 * Looking for test storage... 00:06:48.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:48.704 07:32:14 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:48.961 ************************************ 00:06:48.961 START TEST dd_malloc_copy 00:06:48.961 ************************************ 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:48.961 07:32:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:48.961 [2024-07-26 07:32:14.373155] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:06:48.961 [2024-07-26 07:32:14.373895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62913 ] 00:06:48.961 { 00:06:48.961 "subsystems": [ 00:06:48.961 { 00:06:48.961 "subsystem": "bdev", 00:06:48.961 "config": [ 00:06:48.961 { 00:06:48.961 "params": { 00:06:48.961 "block_size": 512, 00:06:48.961 "num_blocks": 1048576, 00:06:48.961 "name": "malloc0" 00:06:48.961 }, 00:06:48.961 "method": "bdev_malloc_create" 00:06:48.961 }, 00:06:48.961 { 00:06:48.961 "params": { 00:06:48.961 "block_size": 512, 00:06:48.961 "num_blocks": 1048576, 00:06:48.961 "name": "malloc1" 00:06:48.961 }, 00:06:48.961 "method": "bdev_malloc_create" 00:06:48.961 }, 00:06:48.961 { 00:06:48.961 "method": "bdev_wait_for_examine" 00:06:48.961 } 00:06:48.961 ] 00:06:48.961 } 00:06:48.961 ] 00:06:48.961 } 00:06:48.961 [2024-07-26 07:32:14.512437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.219 [2024-07-26 07:32:14.660481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.219 [2024-07-26 07:32:14.736085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.156  Copying: 205/512 [MB] (205 MBps) Copying: 403/512 [MB] (198 MBps) Copying: 512/512 [MB] (average 203 MBps) 00:06:53.156 00:06:53.156 07:32:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:53.156 07:32:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:53.156 07:32:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:53.156 07:32:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:53.156 [2024-07-26 07:32:18.646403] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
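The forward malloc copy above hands spdk_dd its bdev layout over /dev/fd/62; the configuration echoed in this log amounts to two 512 MiB malloc bdevs (1048576 blocks of 512 B) copied one into the other. A standalone sketch, with the config written to an illustrative throwaway file instead of a file descriptor:

# Same bdev config as printed above, saved to a temporary file for clarity.
cat > /tmp/malloc_copy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Forward copy (malloc0 -> malloc1); the reverse pass below simply swaps --ib and --ob.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /tmp/malloc_copy.json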
00:06:53.156 [2024-07-26 07:32:18.646517] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62966 ] 00:06:53.156 { 00:06:53.156 "subsystems": [ 00:06:53.156 { 00:06:53.156 "subsystem": "bdev", 00:06:53.156 "config": [ 00:06:53.156 { 00:06:53.156 "params": { 00:06:53.156 "block_size": 512, 00:06:53.156 "num_blocks": 1048576, 00:06:53.156 "name": "malloc0" 00:06:53.156 }, 00:06:53.156 "method": "bdev_malloc_create" 00:06:53.156 }, 00:06:53.156 { 00:06:53.156 "params": { 00:06:53.156 "block_size": 512, 00:06:53.156 "num_blocks": 1048576, 00:06:53.156 "name": "malloc1" 00:06:53.156 }, 00:06:53.156 "method": "bdev_malloc_create" 00:06:53.156 }, 00:06:53.156 { 00:06:53.156 "method": "bdev_wait_for_examine" 00:06:53.156 } 00:06:53.156 ] 00:06:53.156 } 00:06:53.156 ] 00:06:53.156 } 00:06:53.414 [2024-07-26 07:32:18.781322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.414 [2024-07-26 07:32:18.888898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.414 [2024-07-26 07:32:18.964725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.381  Copying: 199/512 [MB] (199 MBps) Copying: 402/512 [MB] (202 MBps) Copying: 512/512 [MB] (average 200 MBps) 00:06:57.381 00:06:57.381 00:06:57.381 real 0m8.537s 00:06:57.381 user 0m7.222s 00:06:57.381 sys 0m1.152s 00:06:57.381 ************************************ 00:06:57.381 END TEST dd_malloc_copy 00:06:57.381 ************************************ 00:06:57.381 07:32:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.381 07:32:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:57.381 ************************************ 00:06:57.381 END TEST spdk_dd_malloc 00:06:57.381 ************************************ 00:06:57.381 00:06:57.381 real 0m8.679s 00:06:57.381 user 0m7.289s 00:06:57.381 sys 0m1.228s 00:06:57.381 07:32:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.381 07:32:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:57.381 07:32:22 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:57.381 07:32:22 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:57.381 07:32:22 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.381 07:32:22 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:57.381 ************************************ 00:06:57.381 START TEST spdk_dd_bdev_to_bdev 00:06:57.381 ************************************ 00:06:57.381 07:32:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:57.640 * Looking for test storage... 
00:06:57.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:57.640 
07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:57.640 ************************************ 00:06:57.640 START TEST dd_inflate_file 00:06:57.640 ************************************ 00:06:57.640 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:57.640 [2024-07-26 07:32:23.098052] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
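Before this step dd.dump0 presumably holds just the 27-byte magic line, and the inflate run appends 64 MiB of zeroes to it, which matches the 67108891-byte size (64*1048576 + 27) checked a few lines below. The invocation, reformatted for readability:

# Inflate dd.dump0 by appending 64 MiB of zeroes in 1 MiB blocks.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --oflag=append --bs=1048576 --count=64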
00:06:57.640 [2024-07-26 07:32:23.098153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63082 ] 00:06:57.640 [2024-07-26 07:32:23.235638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.903 [2024-07-26 07:32:23.371020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.903 [2024-07-26 07:32:23.445996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.419  Copying: 64/64 [MB] (average 1488 MBps) 00:06:58.419 00:06:58.419 ************************************ 00:06:58.419 END TEST dd_inflate_file 00:06:58.419 ************************************ 00:06:58.419 00:06:58.419 real 0m0.782s 00:06:58.419 user 0m0.486s 00:06:58.419 sys 0m0.388s 00:06:58.419 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.419 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:58.419 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:58.419 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:58.419 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:58.419 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:58.419 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:58.419 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:58.419 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:58.419 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.419 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:58.419 ************************************ 00:06:58.419 START TEST dd_copy_to_out_bdev 00:06:58.419 ************************************ 00:06:58.419 07:32:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:58.419 { 00:06:58.419 "subsystems": [ 00:06:58.419 { 00:06:58.419 "subsystem": "bdev", 00:06:58.419 "config": [ 00:06:58.419 { 00:06:58.419 "params": { 00:06:58.419 "trtype": "pcie", 00:06:58.419 "traddr": "0000:00:10.0", 00:06:58.419 "name": "Nvme0" 00:06:58.419 }, 00:06:58.419 "method": "bdev_nvme_attach_controller" 00:06:58.419 }, 00:06:58.419 { 00:06:58.419 "params": { 00:06:58.419 "trtype": "pcie", 00:06:58.419 "traddr": "0000:00:11.0", 00:06:58.419 "name": "Nvme1" 00:06:58.419 }, 00:06:58.419 "method": "bdev_nvme_attach_controller" 00:06:58.419 }, 00:06:58.419 { 00:06:58.419 "method": "bdev_wait_for_examine" 00:06:58.419 } 00:06:58.419 ] 00:06:58.419 } 00:06:58.419 ] 00:06:58.419 } 00:06:58.419 [2024-07-26 07:32:23.942784] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
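The copy-to-out-bdev run launched here attaches both NVMe controllers by PCI address and streams the inflated dd.dump0 into Nvme0n1. The JSON printed above, written out as a plain config file (the file name below is an illustrative stand-in for /dev/fd/62), is simply:

# nvme_pair.json: attach both test controllers, then wait for their bdevs.
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "params": { "trtype": "pcie", "traddr": "0000:00:11.0", "name": "Nvme1" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}

With that in place, the copy itself is the spdk_dd call shown above: --if=.../dd.dump0 --ob=Nvme0n1 --json pointed at this config.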
00:06:58.419 [2024-07-26 07:32:23.942912] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63115 ] 00:06:58.678 [2024-07-26 07:32:24.078207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.678 [2024-07-26 07:32:24.179284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.678 [2024-07-26 07:32:24.253661] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.621  Copying: 54/64 [MB] (54 MBps) Copying: 64/64 [MB] (average 54 MBps) 00:07:00.621 00:07:00.621 00:07:00.621 real 0m2.068s 00:07:00.621 user 0m1.786s 00:07:00.621 sys 0m1.639s 00:07:00.621 07:32:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.621 ************************************ 00:07:00.621 END TEST dd_copy_to_out_bdev 00:07:00.621 ************************************ 00:07:00.621 07:32:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:00.621 07:32:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:00.621 07:32:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:00.621 07:32:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.621 07:32:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.621 07:32:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:00.621 ************************************ 00:07:00.621 START TEST dd_offset_magic 00:07:00.621 ************************************ 00:07:00.621 07:32:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:07:00.621 07:32:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:00.621 07:32:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:00.621 07:32:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:00.621 07:32:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:00.621 07:32:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:00.621 07:32:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:00.622 07:32:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:00.622 07:32:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:00.622 [2024-07-26 07:32:26.067597] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:07:00.622 [2024-07-26 07:32:26.067691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63160 ] 00:07:00.622 { 00:07:00.622 "subsystems": [ 00:07:00.622 { 00:07:00.622 "subsystem": "bdev", 00:07:00.622 "config": [ 00:07:00.622 { 00:07:00.622 "params": { 00:07:00.622 "trtype": "pcie", 00:07:00.622 "traddr": "0000:00:10.0", 00:07:00.622 "name": "Nvme0" 00:07:00.622 }, 00:07:00.622 "method": "bdev_nvme_attach_controller" 00:07:00.622 }, 00:07:00.622 { 00:07:00.622 "params": { 00:07:00.622 "trtype": "pcie", 00:07:00.622 "traddr": "0000:00:11.0", 00:07:00.622 "name": "Nvme1" 00:07:00.622 }, 00:07:00.622 "method": "bdev_nvme_attach_controller" 00:07:00.622 }, 00:07:00.622 { 00:07:00.622 "method": "bdev_wait_for_examine" 00:07:00.622 } 00:07:00.622 ] 00:07:00.622 } 00:07:00.622 ] 00:07:00.622 } 00:07:00.622 [2024-07-26 07:32:26.205806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.881 [2024-07-26 07:32:26.342106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.881 [2024-07-26 07:32:26.416896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.707  Copying: 65/65 [MB] (average 866 MBps) 00:07:01.707 00:07:01.707 07:32:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:01.707 07:32:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:01.707 07:32:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:01.707 07:32:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:01.707 [2024-07-26 07:32:27.083284] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:07:01.707 [2024-07-26 07:32:27.083368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63182 ] 00:07:01.707 { 00:07:01.707 "subsystems": [ 00:07:01.707 { 00:07:01.707 "subsystem": "bdev", 00:07:01.707 "config": [ 00:07:01.707 { 00:07:01.707 "params": { 00:07:01.707 "trtype": "pcie", 00:07:01.707 "traddr": "0000:00:10.0", 00:07:01.707 "name": "Nvme0" 00:07:01.707 }, 00:07:01.707 "method": "bdev_nvme_attach_controller" 00:07:01.707 }, 00:07:01.707 { 00:07:01.707 "params": { 00:07:01.707 "trtype": "pcie", 00:07:01.707 "traddr": "0000:00:11.0", 00:07:01.707 "name": "Nvme1" 00:07:01.707 }, 00:07:01.707 "method": "bdev_nvme_attach_controller" 00:07:01.707 }, 00:07:01.707 { 00:07:01.707 "method": "bdev_wait_for_examine" 00:07:01.707 } 00:07:01.707 ] 00:07:01.707 } 00:07:01.707 ] 00:07:01.707 } 00:07:01.707 [2024-07-26 07:32:27.214349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.986 [2024-07-26 07:32:27.343679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.986 [2024-07-26 07:32:27.423095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.512  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:02.512 00:07:02.512 07:32:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:02.512 07:32:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:02.512 07:32:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:02.512 07:32:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:02.512 07:32:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:02.512 07:32:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:02.512 07:32:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:02.512 { 00:07:02.512 "subsystems": [ 00:07:02.512 { 00:07:02.512 "subsystem": "bdev", 00:07:02.512 "config": [ 00:07:02.512 { 00:07:02.512 "params": { 00:07:02.512 "trtype": "pcie", 00:07:02.512 "traddr": "0000:00:10.0", 00:07:02.512 "name": "Nvme0" 00:07:02.512 }, 00:07:02.512 "method": "bdev_nvme_attach_controller" 00:07:02.512 }, 00:07:02.512 { 00:07:02.512 "params": { 00:07:02.512 "trtype": "pcie", 00:07:02.512 "traddr": "0000:00:11.0", 00:07:02.512 "name": "Nvme1" 00:07:02.512 }, 00:07:02.512 "method": "bdev_nvme_attach_controller" 00:07:02.512 }, 00:07:02.512 { 00:07:02.512 "method": "bdev_wait_for_examine" 00:07:02.512 } 00:07:02.512 ] 00:07:02.512 } 00:07:02.512 ] 00:07:02.512 } 00:07:02.512 [2024-07-26 07:32:27.982389] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
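Each offset_magic round writes 65 MiB from Nvme0n1 into Nvme1n1 at a given 1 MiB offset (16, then 64), reads 1 MiB back from that offset into dd.dump1, and compares the first 26 bytes against the magic string. A sketch of one round against the same two-controller config, where nvme_pair.json stands in for the /dev/fd/62 config and the read redirection is an assumption (xtrace does not show redirects):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
# Write 65 MiB at offset 16 MiB, then read 1 MiB back from the same offset.
"$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json nvme_pair.json
"$SPDK_DD" --ib=Nvme1n1 --of="$DUMP1" --count=1 --skip=16 --bs=1048576 --json nvme_pair.json
read -rn26 magic_check < "$DUMP1"                  # assumed source of the read above
[[ $magic_check == 'This Is Our Magic, find it' ]] # the check that passes in this log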
00:07:02.512 [2024-07-26 07:32:27.982775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63204 ] 00:07:02.771 [2024-07-26 07:32:28.121935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.771 [2024-07-26 07:32:28.261415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.771 [2024-07-26 07:32:28.340940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.598  Copying: 65/65 [MB] (average 942 MBps) 00:07:03.598 00:07:03.598 07:32:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:03.598 07:32:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:03.598 07:32:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:03.598 07:32:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:03.598 [2024-07-26 07:32:29.025600] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:07:03.598 [2024-07-26 07:32:29.025682] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63224 ] 00:07:03.598 { 00:07:03.598 "subsystems": [ 00:07:03.598 { 00:07:03.598 "subsystem": "bdev", 00:07:03.598 "config": [ 00:07:03.598 { 00:07:03.598 "params": { 00:07:03.598 "trtype": "pcie", 00:07:03.598 "traddr": "0000:00:10.0", 00:07:03.598 "name": "Nvme0" 00:07:03.598 }, 00:07:03.598 "method": "bdev_nvme_attach_controller" 00:07:03.598 }, 00:07:03.598 { 00:07:03.598 "params": { 00:07:03.598 "trtype": "pcie", 00:07:03.598 "traddr": "0000:00:11.0", 00:07:03.598 "name": "Nvme1" 00:07:03.598 }, 00:07:03.598 "method": "bdev_nvme_attach_controller" 00:07:03.598 }, 00:07:03.598 { 00:07:03.598 "method": "bdev_wait_for_examine" 00:07:03.598 } 00:07:03.598 ] 00:07:03.598 } 00:07:03.598 ] 00:07:03.598 } 00:07:03.598 [2024-07-26 07:32:29.161269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.857 [2024-07-26 07:32:29.290656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.857 [2024-07-26 07:32:29.371573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.375  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:04.375 00:07:04.375 07:32:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:04.375 07:32:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:04.375 00:07:04.375 real 0m3.866s 00:07:04.375 user 0m2.810s 00:07:04.375 sys 0m1.227s 00:07:04.375 ************************************ 00:07:04.375 END TEST dd_offset_magic 00:07:04.375 ************************************ 00:07:04.375 07:32:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.375 07:32:29 
spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:04.375 07:32:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:04.375 07:32:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:04.375 07:32:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:04.375 07:32:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:04.375 07:32:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:04.375 07:32:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:04.375 07:32:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:04.375 07:32:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:04.375 07:32:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:04.375 07:32:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:04.375 07:32:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:04.634 [2024-07-26 07:32:29.976853] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:07:04.634 [2024-07-26 07:32:29.976970] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63261 ] 00:07:04.634 { 00:07:04.634 "subsystems": [ 00:07:04.634 { 00:07:04.634 "subsystem": "bdev", 00:07:04.634 "config": [ 00:07:04.634 { 00:07:04.634 "params": { 00:07:04.634 "trtype": "pcie", 00:07:04.634 "traddr": "0000:00:10.0", 00:07:04.634 "name": "Nvme0" 00:07:04.634 }, 00:07:04.634 "method": "bdev_nvme_attach_controller" 00:07:04.634 }, 00:07:04.634 { 00:07:04.634 "params": { 00:07:04.634 "trtype": "pcie", 00:07:04.634 "traddr": "0000:00:11.0", 00:07:04.634 "name": "Nvme1" 00:07:04.634 }, 00:07:04.634 "method": "bdev_nvme_attach_controller" 00:07:04.634 }, 00:07:04.634 { 00:07:04.634 "method": "bdev_wait_for_examine" 00:07:04.634 } 00:07:04.634 ] 00:07:04.634 } 00:07:04.634 ] 00:07:04.634 } 00:07:04.634 [2024-07-26 07:32:30.117124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.892 [2024-07-26 07:32:30.252416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.892 [2024-07-26 07:32:30.331115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.410  Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:05.410 00:07:05.410 07:32:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:05.410 07:32:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:05.410 07:32:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:05.410 07:32:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:05.410 07:32:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:05.410 07:32:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:05.410 07:32:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:05.410 07:32:30 
spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:05.410 07:32:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:05.410 07:32:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:05.410 [2024-07-26 07:32:30.891216] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:07:05.410 [2024-07-26 07:32:30.891309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63277 ] 00:07:05.410 { 00:07:05.410 "subsystems": [ 00:07:05.410 { 00:07:05.410 "subsystem": "bdev", 00:07:05.410 "config": [ 00:07:05.410 { 00:07:05.410 "params": { 00:07:05.410 "trtype": "pcie", 00:07:05.410 "traddr": "0000:00:10.0", 00:07:05.410 "name": "Nvme0" 00:07:05.410 }, 00:07:05.410 "method": "bdev_nvme_attach_controller" 00:07:05.410 }, 00:07:05.410 { 00:07:05.410 "params": { 00:07:05.410 "trtype": "pcie", 00:07:05.410 "traddr": "0000:00:11.0", 00:07:05.410 "name": "Nvme1" 00:07:05.410 }, 00:07:05.410 "method": "bdev_nvme_attach_controller" 00:07:05.410 }, 00:07:05.410 { 00:07:05.410 "method": "bdev_wait_for_examine" 00:07:05.410 } 00:07:05.410 ] 00:07:05.410 } 00:07:05.410 ] 00:07:05.410 } 00:07:05.668 [2024-07-26 07:32:31.028037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.668 [2024-07-26 07:32:31.165561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.668 [2024-07-26 07:32:31.243411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.186  Copying: 5120/5120 [kB] (average 833 MBps) 00:07:06.186 00:07:06.186 07:32:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:06.186 ************************************ 00:07:06.186 END TEST spdk_dd_bdev_to_bdev 00:07:06.186 ************************************ 00:07:06.186 00:07:06.186 real 0m8.834s 00:07:06.186 user 0m6.489s 00:07:06.186 sys 0m4.140s 00:07:06.186 07:32:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.186 07:32:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:06.445 07:32:31 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:06.445 07:32:31 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:06.445 07:32:31 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.445 07:32:31 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.445 07:32:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:06.445 ************************************ 00:07:06.445 START TEST spdk_dd_uring 00:07:06.445 ************************************ 00:07:06.445 07:32:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:06.445 * Looking for test storage... 
00:07:06.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:06.445 07:32:31 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:06.445 07:32:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.445 07:32:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.445 07:32:31 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.445 07:32:31 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.445 07:32:31 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.445 07:32:31 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.445 07:32:31 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:06.445 07:32:31 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.445 07:32:31 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:06.446 ************************************ 00:07:06.446 START TEST dd_uring_copy 00:07:06.446 ************************************ 00:07:06.446 
07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=qrosyyv6vml47yssse12s1vqqsg6kibmfjntrxtp4sniy0z7npkvgy0z30qtac75wgv18yeqpwsatignveciea1ibthrpk7nd91sgk8b5pi8wvq64ycv482k3c6ry7ox5eaat2qzlb43epnd68zygr03ynoyrmxr415tochm46tamlhs36tmzxioyn5wgskjkf3ftda5zd3ly3fn7gln0eyyduh0odtj2rurscqxt4tmfa4otd2svvazjifrqhzm9janpfixq7lwwg3axpfby3bgu3itx12fp4jtf7q6c4wmxpj7ktpnq5vyyfpygvhngfa3jj2xcbci53sdguz9bq9hntn61sxatvazz11toxxxnhsxfa878hdbm066aat54agyd4nacpryfqzxg1eo2i4kdjkdsssnsmp3xy8bj29kaz14g85msf9uebmyb2chwvteg8yc1jkpvrmcvjnfgtww34axwzapc8oratdqhv8jxuyhpgweylkjqlgrtymzyqj13ihp8whzq5gjtithprhokrfo463azz84yoalimivdcggq335r39qlsztlpvjlvqqonbq5b4hla3m1fy29oqbyzvbngrv6091t76bjce67g8vd068ao75apbgjthhdtnnlb6qmh9vd82dpxtbzfa5cig9lp2nrnzouiz0zgqam7v21r25j85nvg50iay6xatk9053grxzs0ktir5p04f3gm9ctezzidh6mibqsflgjuktcd5dnjjyj3dgibxvss5duzna53re7ihujvhzf5tjytuvbhoaybzgulzsxpmfwolr5j9qm0sp560o7vcvqkkf27aa1vef2tvk7l2cp3svbhs8it80llvm4saiw9b8yz881n8gzxozz4ecx8xh3nf766glfkcum9becdn7ly4wby7tk1nlqoam36pz8oe9l28pyzwgk2x2uw5z5ff3ehjrxzpoli7vo1ikgw2ajcjtfbq03mh0blnq1kmhlz4myn3f0nwp6fghh50mjwni 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo qrosyyv6vml47yssse12s1vqqsg6kibmfjntrxtp4sniy0z7npkvgy0z30qtac75wgv18yeqpwsatignveciea1ibthrpk7nd91sgk8b5pi8wvq64ycv482k3c6ry7ox5eaat2qzlb43epnd68zygr03ynoyrmxr415tochm46tamlhs36tmzxioyn5wgskjkf3ftda5zd3ly3fn7gln0eyyduh0odtj2rurscqxt4tmfa4otd2svvazjifrqhzm9janpfixq7lwwg3axpfby3bgu3itx12fp4jtf7q6c4wmxpj7ktpnq5vyyfpygvhngfa3jj2xcbci53sdguz9bq9hntn61sxatvazz11toxxxnhsxfa878hdbm066aat54agyd4nacpryfqzxg1eo2i4kdjkdsssnsmp3xy8bj29kaz14g85msf9uebmyb2chwvteg8yc1jkpvrmcvjnfgtww34axwzapc8oratdqhv8jxuyhpgweylkjqlgrtymzyqj13ihp8whzq5gjtithprhokrfo463azz84yoalimivdcggq335r39qlsztlpvjlvqqonbq5b4hla3m1fy29oqbyzvbngrv6091t76bjce67g8vd068ao75apbgjthhdtnnlb6qmh9vd82dpxtbzfa5cig9lp2nrnzouiz0zgqam7v21r25j85nvg50iay6xatk9053grxzs0ktir5p04f3gm9ctezzidh6mibqsflgjuktcd5dnjjyj3dgibxvss5duzna53re7ihujvhzf5tjytuvbhoaybzgulzsxpmfwolr5j9qm0sp560o7vcvqkkf27aa1vef2tvk7l2cp3svbhs8it80llvm4saiw9b8yz881n8gzxozz4ecx8xh3nf766glfkcum9becdn7ly4wby7tk1nlqoam36pz8oe9l28pyzwgk2x2uw5z5ff3ehjrxzpoli7vo1ikgw2ajcjtfbq03mh0blnq1kmhlz4myn3f0nwp6fghh50mjwni 00:07:06.446 07:32:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:06.446 [2024-07-26 07:32:32.010202] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:07:06.446 [2024-07-26 07:32:32.010300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63351 ] 00:07:06.704 [2024-07-26 07:32:32.148978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.704 [2024-07-26 07:32:32.268389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.963 [2024-07-26 07:32:32.350989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.466  Copying: 511/511 [MB] (average 829 MBps) 00:07:08.466 00:07:08.466 07:32:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:08.466 07:32:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:08.466 07:32:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:08.466 07:32:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:08.466 [2024-07-26 07:32:33.942579] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:07:08.466 [2024-07-26 07:32:33.942836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63374 ] 00:07:08.466 { 00:07:08.466 "subsystems": [ 00:07:08.466 { 00:07:08.466 "subsystem": "bdev", 00:07:08.466 "config": [ 00:07:08.466 { 00:07:08.466 "params": { 00:07:08.466 "block_size": 512, 00:07:08.467 "num_blocks": 1048576, 00:07:08.467 "name": "malloc0" 00:07:08.467 }, 00:07:08.467 "method": "bdev_malloc_create" 00:07:08.467 }, 00:07:08.467 { 00:07:08.467 "params": { 00:07:08.467 "filename": "/dev/zram1", 00:07:08.467 "name": "uring0" 00:07:08.467 }, 00:07:08.467 "method": "bdev_uring_create" 00:07:08.467 }, 00:07:08.467 { 00:07:08.467 "method": "bdev_wait_for_examine" 00:07:08.467 } 00:07:08.467 ] 00:07:08.467 } 00:07:08.467 ] 00:07:08.467 } 00:07:08.725 [2024-07-26 07:32:34.082404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.725 [2024-07-26 07:32:34.206495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.725 [2024-07-26 07:32:34.287291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.232  Copying: 218/512 [MB] (218 MBps) Copying: 437/512 [MB] (219 MBps) Copying: 512/512 [MB] (average 219 MBps) 00:07:12.232 00:07:12.232 07:32:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:12.232 07:32:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:12.232 07:32:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:12.232 07:32:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:12.232 [2024-07-26 07:32:37.575220] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
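The JSON blocks interleaved with the trace are the bdev configuration that gen_conf emits and spdk_dd reads back over an anonymous descriptor (--json /dev/fd/62): a 512 MiB malloc bdev plus a uring bdev wrapping /dev/zram1. A hedged equivalent of the read-back invocation that starts here, using plain process substitution instead of the gen_conf helper and the paths shown in the log (assumes /dev/zram1 is already provisioned):

    #!/usr/bin/env bash
    # Copy the uring0 bdev back out to a file; the bdev config is passed on an
    # anonymous descriptor via process substitution, much like gen_conf does
    # with --json /dev/fd/62 in the trace.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1

    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_malloc_create","params":{"name":"malloc0","num_blocks":1048576,"block_size":512}},
      {"method":"bdev_uring_create","params":{"filename":"/dev/zram1","name":"uring0"}},
      {"method":"bdev_wait_for_examine"}]}]}'

    "$SPDK_DD" --ib=uring0 --of="$DUMP1" --json <(printf '%s\n' "$conf")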
00:07:12.232 [2024-07-26 07:32:37.575518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63429 ] 00:07:12.232 { 00:07:12.232 "subsystems": [ 00:07:12.232 { 00:07:12.232 "subsystem": "bdev", 00:07:12.232 "config": [ 00:07:12.232 { 00:07:12.232 "params": { 00:07:12.232 "block_size": 512, 00:07:12.232 "num_blocks": 1048576, 00:07:12.232 "name": "malloc0" 00:07:12.232 }, 00:07:12.232 "method": "bdev_malloc_create" 00:07:12.232 }, 00:07:12.232 { 00:07:12.232 "params": { 00:07:12.232 "filename": "/dev/zram1", 00:07:12.232 "name": "uring0" 00:07:12.232 }, 00:07:12.232 "method": "bdev_uring_create" 00:07:12.232 }, 00:07:12.232 { 00:07:12.232 "method": "bdev_wait_for_examine" 00:07:12.232 } 00:07:12.232 ] 00:07:12.232 } 00:07:12.232 ] 00:07:12.232 } 00:07:12.232 [2024-07-26 07:32:37.714153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.232 [2024-07-26 07:32:37.829459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.491 [2024-07-26 07:32:37.911427] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.305  Copying: 187/512 [MB] (187 MBps) Copying: 364/512 [MB] (177 MBps) Copying: 512/512 [MB] (average 177 MBps) 00:07:16.305 00:07:16.305 07:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:16.305 07:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ qrosyyv6vml47yssse12s1vqqsg6kibmfjntrxtp4sniy0z7npkvgy0z30qtac75wgv18yeqpwsatignveciea1ibthrpk7nd91sgk8b5pi8wvq64ycv482k3c6ry7ox5eaat2qzlb43epnd68zygr03ynoyrmxr415tochm46tamlhs36tmzxioyn5wgskjkf3ftda5zd3ly3fn7gln0eyyduh0odtj2rurscqxt4tmfa4otd2svvazjifrqhzm9janpfixq7lwwg3axpfby3bgu3itx12fp4jtf7q6c4wmxpj7ktpnq5vyyfpygvhngfa3jj2xcbci53sdguz9bq9hntn61sxatvazz11toxxxnhsxfa878hdbm066aat54agyd4nacpryfqzxg1eo2i4kdjkdsssnsmp3xy8bj29kaz14g85msf9uebmyb2chwvteg8yc1jkpvrmcvjnfgtww34axwzapc8oratdqhv8jxuyhpgweylkjqlgrtymzyqj13ihp8whzq5gjtithprhokrfo463azz84yoalimivdcggq335r39qlsztlpvjlvqqonbq5b4hla3m1fy29oqbyzvbngrv6091t76bjce67g8vd068ao75apbgjthhdtnnlb6qmh9vd82dpxtbzfa5cig9lp2nrnzouiz0zgqam7v21r25j85nvg50iay6xatk9053grxzs0ktir5p04f3gm9ctezzidh6mibqsflgjuktcd5dnjjyj3dgibxvss5duzna53re7ihujvhzf5tjytuvbhoaybzgulzsxpmfwolr5j9qm0sp560o7vcvqkkf27aa1vef2tvk7l2cp3svbhs8it80llvm4saiw9b8yz881n8gzxozz4ecx8xh3nf766glfkcum9becdn7ly4wby7tk1nlqoam36pz8oe9l28pyzwgk2x2uw5z5ff3ehjrxzpoli7vo1ikgw2ajcjtfbq03mh0blnq1kmhlz4myn3f0nwp6fghh50mjwni == 
\q\r\o\s\y\y\v\6\v\m\l\4\7\y\s\s\s\e\1\2\s\1\v\q\q\s\g\6\k\i\b\m\f\j\n\t\r\x\t\p\4\s\n\i\y\0\z\7\n\p\k\v\g\y\0\z\3\0\q\t\a\c\7\5\w\g\v\1\8\y\e\q\p\w\s\a\t\i\g\n\v\e\c\i\e\a\1\i\b\t\h\r\p\k\7\n\d\9\1\s\g\k\8\b\5\p\i\8\w\v\q\6\4\y\c\v\4\8\2\k\3\c\6\r\y\7\o\x\5\e\a\a\t\2\q\z\l\b\4\3\e\p\n\d\6\8\z\y\g\r\0\3\y\n\o\y\r\m\x\r\4\1\5\t\o\c\h\m\4\6\t\a\m\l\h\s\3\6\t\m\z\x\i\o\y\n\5\w\g\s\k\j\k\f\3\f\t\d\a\5\z\d\3\l\y\3\f\n\7\g\l\n\0\e\y\y\d\u\h\0\o\d\t\j\2\r\u\r\s\c\q\x\t\4\t\m\f\a\4\o\t\d\2\s\v\v\a\z\j\i\f\r\q\h\z\m\9\j\a\n\p\f\i\x\q\7\l\w\w\g\3\a\x\p\f\b\y\3\b\g\u\3\i\t\x\1\2\f\p\4\j\t\f\7\q\6\c\4\w\m\x\p\j\7\k\t\p\n\q\5\v\y\y\f\p\y\g\v\h\n\g\f\a\3\j\j\2\x\c\b\c\i\5\3\s\d\g\u\z\9\b\q\9\h\n\t\n\6\1\s\x\a\t\v\a\z\z\1\1\t\o\x\x\x\n\h\s\x\f\a\8\7\8\h\d\b\m\0\6\6\a\a\t\5\4\a\g\y\d\4\n\a\c\p\r\y\f\q\z\x\g\1\e\o\2\i\4\k\d\j\k\d\s\s\s\n\s\m\p\3\x\y\8\b\j\2\9\k\a\z\1\4\g\8\5\m\s\f\9\u\e\b\m\y\b\2\c\h\w\v\t\e\g\8\y\c\1\j\k\p\v\r\m\c\v\j\n\f\g\t\w\w\3\4\a\x\w\z\a\p\c\8\o\r\a\t\d\q\h\v\8\j\x\u\y\h\p\g\w\e\y\l\k\j\q\l\g\r\t\y\m\z\y\q\j\1\3\i\h\p\8\w\h\z\q\5\g\j\t\i\t\h\p\r\h\o\k\r\f\o\4\6\3\a\z\z\8\4\y\o\a\l\i\m\i\v\d\c\g\g\q\3\3\5\r\3\9\q\l\s\z\t\l\p\v\j\l\v\q\q\o\n\b\q\5\b\4\h\l\a\3\m\1\f\y\2\9\o\q\b\y\z\v\b\n\g\r\v\6\0\9\1\t\7\6\b\j\c\e\6\7\g\8\v\d\0\6\8\a\o\7\5\a\p\b\g\j\t\h\h\d\t\n\n\l\b\6\q\m\h\9\v\d\8\2\d\p\x\t\b\z\f\a\5\c\i\g\9\l\p\2\n\r\n\z\o\u\i\z\0\z\g\q\a\m\7\v\2\1\r\2\5\j\8\5\n\v\g\5\0\i\a\y\6\x\a\t\k\9\0\5\3\g\r\x\z\s\0\k\t\i\r\5\p\0\4\f\3\g\m\9\c\t\e\z\z\i\d\h\6\m\i\b\q\s\f\l\g\j\u\k\t\c\d\5\d\n\j\j\y\j\3\d\g\i\b\x\v\s\s\5\d\u\z\n\a\5\3\r\e\7\i\h\u\j\v\h\z\f\5\t\j\y\t\u\v\b\h\o\a\y\b\z\g\u\l\z\s\x\p\m\f\w\o\l\r\5\j\9\q\m\0\s\p\5\6\0\o\7\v\c\v\q\k\k\f\2\7\a\a\1\v\e\f\2\t\v\k\7\l\2\c\p\3\s\v\b\h\s\8\i\t\8\0\l\l\v\m\4\s\a\i\w\9\b\8\y\z\8\8\1\n\8\g\z\x\o\z\z\4\e\c\x\8\x\h\3\n\f\7\6\6\g\l\f\k\c\u\m\9\b\e\c\d\n\7\l\y\4\w\b\y\7\t\k\1\n\l\q\o\a\m\3\6\p\z\8\o\e\9\l\2\8\p\y\z\w\g\k\2\x\2\u\w\5\z\5\f\f\3\e\h\j\r\x\z\p\o\l\i\7\v\o\1\i\k\g\w\2\a\j\c\j\t\f\b\q\0\3\m\h\0\b\l\n\q\1\k\m\h\l\z\4\m\y\n\3\f\0\n\w\p\6\f\g\h\h\5\0\m\j\w\n\i ]] 00:07:16.305 07:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:16.305 07:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ qrosyyv6vml47yssse12s1vqqsg6kibmfjntrxtp4sniy0z7npkvgy0z30qtac75wgv18yeqpwsatignveciea1ibthrpk7nd91sgk8b5pi8wvq64ycv482k3c6ry7ox5eaat2qzlb43epnd68zygr03ynoyrmxr415tochm46tamlhs36tmzxioyn5wgskjkf3ftda5zd3ly3fn7gln0eyyduh0odtj2rurscqxt4tmfa4otd2svvazjifrqhzm9janpfixq7lwwg3axpfby3bgu3itx12fp4jtf7q6c4wmxpj7ktpnq5vyyfpygvhngfa3jj2xcbci53sdguz9bq9hntn61sxatvazz11toxxxnhsxfa878hdbm066aat54agyd4nacpryfqzxg1eo2i4kdjkdsssnsmp3xy8bj29kaz14g85msf9uebmyb2chwvteg8yc1jkpvrmcvjnfgtww34axwzapc8oratdqhv8jxuyhpgweylkjqlgrtymzyqj13ihp8whzq5gjtithprhokrfo463azz84yoalimivdcggq335r39qlsztlpvjlvqqonbq5b4hla3m1fy29oqbyzvbngrv6091t76bjce67g8vd068ao75apbgjthhdtnnlb6qmh9vd82dpxtbzfa5cig9lp2nrnzouiz0zgqam7v21r25j85nvg50iay6xatk9053grxzs0ktir5p04f3gm9ctezzidh6mibqsflgjuktcd5dnjjyj3dgibxvss5duzna53re7ihujvhzf5tjytuvbhoaybzgulzsxpmfwolr5j9qm0sp560o7vcvqkkf27aa1vef2tvk7l2cp3svbhs8it80llvm4saiw9b8yz881n8gzxozz4ecx8xh3nf766glfkcum9becdn7ly4wby7tk1nlqoam36pz8oe9l28pyzwgk2x2uw5z5ff3ehjrxzpoli7vo1ikgw2ajcjtfbq03mh0blnq1kmhlz4myn3f0nwp6fghh50mjwni == 
\q\r\o\s\y\y\v\6\v\m\l\4\7\y\s\s\s\e\1\2\s\1\v\q\q\s\g\6\k\i\b\m\f\j\n\t\r\x\t\p\4\s\n\i\y\0\z\7\n\p\k\v\g\y\0\z\3\0\q\t\a\c\7\5\w\g\v\1\8\y\e\q\p\w\s\a\t\i\g\n\v\e\c\i\e\a\1\i\b\t\h\r\p\k\7\n\d\9\1\s\g\k\8\b\5\p\i\8\w\v\q\6\4\y\c\v\4\8\2\k\3\c\6\r\y\7\o\x\5\e\a\a\t\2\q\z\l\b\4\3\e\p\n\d\6\8\z\y\g\r\0\3\y\n\o\y\r\m\x\r\4\1\5\t\o\c\h\m\4\6\t\a\m\l\h\s\3\6\t\m\z\x\i\o\y\n\5\w\g\s\k\j\k\f\3\f\t\d\a\5\z\d\3\l\y\3\f\n\7\g\l\n\0\e\y\y\d\u\h\0\o\d\t\j\2\r\u\r\s\c\q\x\t\4\t\m\f\a\4\o\t\d\2\s\v\v\a\z\j\i\f\r\q\h\z\m\9\j\a\n\p\f\i\x\q\7\l\w\w\g\3\a\x\p\f\b\y\3\b\g\u\3\i\t\x\1\2\f\p\4\j\t\f\7\q\6\c\4\w\m\x\p\j\7\k\t\p\n\q\5\v\y\y\f\p\y\g\v\h\n\g\f\a\3\j\j\2\x\c\b\c\i\5\3\s\d\g\u\z\9\b\q\9\h\n\t\n\6\1\s\x\a\t\v\a\z\z\1\1\t\o\x\x\x\n\h\s\x\f\a\8\7\8\h\d\b\m\0\6\6\a\a\t\5\4\a\g\y\d\4\n\a\c\p\r\y\f\q\z\x\g\1\e\o\2\i\4\k\d\j\k\d\s\s\s\n\s\m\p\3\x\y\8\b\j\2\9\k\a\z\1\4\g\8\5\m\s\f\9\u\e\b\m\y\b\2\c\h\w\v\t\e\g\8\y\c\1\j\k\p\v\r\m\c\v\j\n\f\g\t\w\w\3\4\a\x\w\z\a\p\c\8\o\r\a\t\d\q\h\v\8\j\x\u\y\h\p\g\w\e\y\l\k\j\q\l\g\r\t\y\m\z\y\q\j\1\3\i\h\p\8\w\h\z\q\5\g\j\t\i\t\h\p\r\h\o\k\r\f\o\4\6\3\a\z\z\8\4\y\o\a\l\i\m\i\v\d\c\g\g\q\3\3\5\r\3\9\q\l\s\z\t\l\p\v\j\l\v\q\q\o\n\b\q\5\b\4\h\l\a\3\m\1\f\y\2\9\o\q\b\y\z\v\b\n\g\r\v\6\0\9\1\t\7\6\b\j\c\e\6\7\g\8\v\d\0\6\8\a\o\7\5\a\p\b\g\j\t\h\h\d\t\n\n\l\b\6\q\m\h\9\v\d\8\2\d\p\x\t\b\z\f\a\5\c\i\g\9\l\p\2\n\r\n\z\o\u\i\z\0\z\g\q\a\m\7\v\2\1\r\2\5\j\8\5\n\v\g\5\0\i\a\y\6\x\a\t\k\9\0\5\3\g\r\x\z\s\0\k\t\i\r\5\p\0\4\f\3\g\m\9\c\t\e\z\z\i\d\h\6\m\i\b\q\s\f\l\g\j\u\k\t\c\d\5\d\n\j\j\y\j\3\d\g\i\b\x\v\s\s\5\d\u\z\n\a\5\3\r\e\7\i\h\u\j\v\h\z\f\5\t\j\y\t\u\v\b\h\o\a\y\b\z\g\u\l\z\s\x\p\m\f\w\o\l\r\5\j\9\q\m\0\s\p\5\6\0\o\7\v\c\v\q\k\k\f\2\7\a\a\1\v\e\f\2\t\v\k\7\l\2\c\p\3\s\v\b\h\s\8\i\t\8\0\l\l\v\m\4\s\a\i\w\9\b\8\y\z\8\8\1\n\8\g\z\x\o\z\z\4\e\c\x\8\x\h\3\n\f\7\6\6\g\l\f\k\c\u\m\9\b\e\c\d\n\7\l\y\4\w\b\y\7\t\k\1\n\l\q\o\a\m\3\6\p\z\8\o\e\9\l\2\8\p\y\z\w\g\k\2\x\2\u\w\5\z\5\f\f\3\e\h\j\r\x\z\p\o\l\i\7\v\o\1\i\k\g\w\2\a\j\c\j\t\f\b\q\0\3\m\h\0\b\l\n\q\1\k\m\h\l\z\4\m\y\n\3\f\0\n\w\p\6\f\g\h\h\5\0\m\j\w\n\i ]] 00:07:16.305 07:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:16.564 07:32:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:16.564 07:32:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:16.564 07:32:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:16.564 07:32:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:16.824 [2024-07-26 07:32:42.184688] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
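Before the uring0-to-malloc0 copy that starts here, the round trip has already been verified twice over: the first 1024 bytes of each dump are read back with read -rn1024 and matched against the generated magic (the long escaped [[ ... == ... ]] comparisons above), and diff -q confirms magic.dump0 and magic.dump1 are identical byte for byte. A simplified sketch of that check, comparing the two dumps directly rather than against the saved $magic variable, using the paths from the log:

    #!/usr/bin/env bash
    set -euo pipefail
    dump0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0
    dump1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1

    # The magic occupies the first 1024 characters of each file; -n1024 stops
    # read before the newline that follows it.
    read -rn1024 magic0 < "$dump0"
    read -rn1024 magic1 < "$dump1"
    [[ "$magic0" == "$magic1" ]] || { echo "magic mismatch" >&2; exit 1; }

    # And the files must match in full, not just in the magic region.
    diff -q "$dump0" "$dump1"
    echo "round trip verified"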
00:07:16.824 [2024-07-26 07:32:42.184784] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63496 ] 00:07:16.824 { 00:07:16.824 "subsystems": [ 00:07:16.824 { 00:07:16.824 "subsystem": "bdev", 00:07:16.824 "config": [ 00:07:16.824 { 00:07:16.824 "params": { 00:07:16.824 "block_size": 512, 00:07:16.824 "num_blocks": 1048576, 00:07:16.824 "name": "malloc0" 00:07:16.824 }, 00:07:16.824 "method": "bdev_malloc_create" 00:07:16.824 }, 00:07:16.824 { 00:07:16.824 "params": { 00:07:16.824 "filename": "/dev/zram1", 00:07:16.824 "name": "uring0" 00:07:16.824 }, 00:07:16.824 "method": "bdev_uring_create" 00:07:16.824 }, 00:07:16.824 { 00:07:16.824 "method": "bdev_wait_for_examine" 00:07:16.824 } 00:07:16.824 ] 00:07:16.824 } 00:07:16.824 ] 00:07:16.824 } 00:07:16.824 [2024-07-26 07:32:42.320242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.082 [2024-07-26 07:32:42.457507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.082 [2024-07-26 07:32:42.538914] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:21.147  Copying: 141/512 [MB] (141 MBps) Copying: 298/512 [MB] (156 MBps) Copying: 458/512 [MB] (160 MBps) Copying: 512/512 [MB] (average 153 MBps) 00:07:21.147 00:07:21.147 07:32:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:21.147 07:32:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:21.147 07:32:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:21.147 07:32:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:21.147 07:32:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:21.147 07:32:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:21.147 07:32:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:21.147 07:32:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:21.405 [2024-07-26 07:32:46.788524] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:07:21.405 [2024-07-26 07:32:46.789505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63562 ] 00:07:21.405 { 00:07:21.405 "subsystems": [ 00:07:21.405 { 00:07:21.405 "subsystem": "bdev", 00:07:21.405 "config": [ 00:07:21.405 { 00:07:21.405 "params": { 00:07:21.405 "block_size": 512, 00:07:21.405 "num_blocks": 1048576, 00:07:21.405 "name": "malloc0" 00:07:21.405 }, 00:07:21.405 "method": "bdev_malloc_create" 00:07:21.405 }, 00:07:21.405 { 00:07:21.405 "params": { 00:07:21.405 "filename": "/dev/zram1", 00:07:21.405 "name": "uring0" 00:07:21.405 }, 00:07:21.405 "method": "bdev_uring_create" 00:07:21.405 }, 00:07:21.405 { 00:07:21.405 "params": { 00:07:21.405 "name": "uring0" 00:07:21.405 }, 00:07:21.405 "method": "bdev_uring_delete" 00:07:21.405 }, 00:07:21.405 { 00:07:21.405 "method": "bdev_wait_for_examine" 00:07:21.405 } 00:07:21.405 ] 00:07:21.405 } 00:07:21.405 ] 00:07:21.405 } 00:07:21.664 [2024-07-26 07:32:47.022204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.664 [2024-07-26 07:32:47.134779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.664 [2024-07-26 07:32:47.207451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.489  Copying: 0/0 [B] (average 0 Bps) 00:07:22.489 00:07:22.489 07:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:22.489 07:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:22.489 07:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:22.489 07:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:07:22.489 07:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:22.489 07:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:22.489 07:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:22.489 07:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.489 07:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.489 07:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.489 07:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.489 07:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.489 07:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.489 07:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.489 07:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:22.489 07:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:22.747 [2024-07-26 07:32:48.099433] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:07:22.747 [2024-07-26 07:32:48.099543] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63596 ] 00:07:22.747 { 00:07:22.747 "subsystems": [ 00:07:22.747 { 00:07:22.747 "subsystem": "bdev", 00:07:22.747 "config": [ 00:07:22.747 { 00:07:22.747 "params": { 00:07:22.747 "block_size": 512, 00:07:22.747 "num_blocks": 1048576, 00:07:22.747 "name": "malloc0" 00:07:22.747 }, 00:07:22.747 "method": "bdev_malloc_create" 00:07:22.747 }, 00:07:22.747 { 00:07:22.747 "params": { 00:07:22.748 "filename": "/dev/zram1", 00:07:22.748 "name": "uring0" 00:07:22.748 }, 00:07:22.748 "method": "bdev_uring_create" 00:07:22.748 }, 00:07:22.748 { 00:07:22.748 "params": { 00:07:22.748 "name": "uring0" 00:07:22.748 }, 00:07:22.748 "method": "bdev_uring_delete" 00:07:22.748 }, 00:07:22.748 { 00:07:22.748 "method": "bdev_wait_for_examine" 00:07:22.748 } 00:07:22.748 ] 00:07:22.748 } 00:07:22.748 ] 00:07:22.748 } 00:07:22.748 [2024-07-26 07:32:48.237386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.748 [2024-07-26 07:32:48.326896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.005 [2024-07-26 07:32:48.403239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.262 [2024-07-26 07:32:48.668168] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:23.262 [2024-07-26 07:32:48.668228] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:23.262 [2024-07-26 07:32:48.668256] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:23.262 [2024-07-26 07:32:48.668267] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:23.827 [2024-07-26 07:32:49.142717] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:23.827 07:32:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:07:23.827 07:32:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:23.827 07:32:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:07:23.827 07:32:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:07:23.827 07:32:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:07:23.827 07:32:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:23.827 07:32:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:23.828 07:32:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:23.828 07:32:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:23.828 07:32:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:23.828 07:32:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:23.828 07:32:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:24.085 00:07:24.085 ************************************ 00:07:24.085 END TEST dd_uring_copy 00:07:24.085 ************************************ 00:07:24.085 real 0m17.582s 00:07:24.085 user 0m11.876s 00:07:24.085 sys 0m13.872s 00:07:24.085 07:32:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.085 07:32:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:24.086 ************************************ 00:07:24.086 END TEST spdk_dd_uring 00:07:24.086 ************************************ 00:07:24.086 00:07:24.086 real 0m17.719s 00:07:24.086 user 0m11.937s 00:07:24.086 sys 0m13.950s 00:07:24.086 07:32:49 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.086 07:32:49 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:24.086 07:32:49 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:24.086 07:32:49 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.086 07:32:49 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.086 07:32:49 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:24.086 ************************************ 00:07:24.086 START TEST spdk_dd_sparse 00:07:24.086 ************************************ 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:24.086 * Looking for test storage... 00:07:24.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:24.086 07:32:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:24.344 1+0 records in 00:07:24.344 1+0 records out 00:07:24.344 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00735476 s, 570 MB/s 00:07:24.344 07:32:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:24.344 1+0 records in 00:07:24.344 1+0 records out 00:07:24.344 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00692401 s, 606 MB/s 00:07:24.344 07:32:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:24.344 1+0 records in 00:07:24.344 1+0 records out 00:07:24.344 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00601337 s, 697 MB/s 00:07:24.344 07:32:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:24.344 07:32:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.344 07:32:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.344 07:32:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:24.344 ************************************ 00:07:24.344 START TEST dd_sparse_file_to_file 00:07:24.344 ************************************ 00:07:24.344 07:32:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # 
file_to_file 00:07:24.344 07:32:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:24.344 07:32:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:24.344 07:32:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:24.344 07:32:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:24.344 07:32:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:24.344 07:32:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:24.344 07:32:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:24.344 07:32:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:24.344 07:32:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:24.344 07:32:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:24.344 [2024-07-26 07:32:49.776760] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:07:24.344 [2024-07-26 07:32:49.777030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63688 ] 00:07:24.344 { 00:07:24.344 "subsystems": [ 00:07:24.344 { 00:07:24.344 "subsystem": "bdev", 00:07:24.344 "config": [ 00:07:24.344 { 00:07:24.344 "params": { 00:07:24.344 "block_size": 4096, 00:07:24.344 "filename": "dd_sparse_aio_disk", 00:07:24.344 "name": "dd_aio" 00:07:24.345 }, 00:07:24.345 "method": "bdev_aio_create" 00:07:24.345 }, 00:07:24.345 { 00:07:24.345 "params": { 00:07:24.345 "lvs_name": "dd_lvstore", 00:07:24.345 "bdev_name": "dd_aio" 00:07:24.345 }, 00:07:24.345 "method": "bdev_lvol_create_lvstore" 00:07:24.345 }, 00:07:24.345 { 00:07:24.345 "method": "bdev_wait_for_examine" 00:07:24.345 } 00:07:24.345 ] 00:07:24.345 } 00:07:24.345 ] 00:07:24.345 } 00:07:24.345 [2024-07-26 07:32:49.916934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.603 [2024-07-26 07:32:50.043176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.603 [2024-07-26 07:32:50.124877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.120  Copying: 12/36 [MB] (average 857 MBps) 00:07:25.120 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:25.120 07:32:50 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:25.120 ************************************ 00:07:25.120 END TEST dd_sparse_file_to_file 00:07:25.120 ************************************ 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:25.120 00:07:25.120 real 0m0.879s 00:07:25.120 user 0m0.557s 00:07:25.120 sys 0m0.479s 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:25.120 ************************************ 00:07:25.120 START TEST dd_sparse_file_to_bdev 00:07:25.120 ************************************ 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:25.120 07:32:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:25.120 [2024-07-26 07:32:50.710664] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
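The file-to-file sparse case that finishes above works on a 36 MiB file holding three 4 MiB data extents (the dd writes at seek 0, 4 and 8 blocks of 4M during prepare), copies it with --sparse through the dd_aio/dd_lvstore stack, and then checks both stat --printf=%s (apparent size, 37748736) and stat --printf=%b (allocated 512-byte blocks, 24576) to confirm the holes survived the copy. A small stand-alone sketch of that preparation and check, assuming GNU coreutils; cp --sparse=always stands in for the spdk_dd --sparse copy:

    #!/usr/bin/env bash
    set -euo pipefail

    # Three 4 MiB extents at offsets 0, 16 MiB and 32 MiB -> 36 MiB apparent size,
    # with only 12 MiB allocated (24576 blocks of 512 bytes in the run above).
    dd if=/dev/zero of=file_zero1 bs=4M count=1
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8

    # Stand-in for the spdk_dd --sparse copy in the trace.
    cp --sparse=always file_zero1 file_zero2

    # Holes preserved <=> apparent size and allocated block count both match.
    [[ $(stat --printf=%s file_zero1) -eq $(stat --printf=%s file_zero2) ]]
    [[ $(stat --printf=%b file_zero1) -eq $(stat --printf=%b file_zero2) ]]
    echo "sparse copy kept $(stat --printf=%b file_zero2) allocated blocks"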
00:07:25.120 [2024-07-26 07:32:50.710769] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63736 ] 00:07:25.120 { 00:07:25.120 "subsystems": [ 00:07:25.120 { 00:07:25.120 "subsystem": "bdev", 00:07:25.120 "config": [ 00:07:25.120 { 00:07:25.120 "params": { 00:07:25.120 "block_size": 4096, 00:07:25.120 "filename": "dd_sparse_aio_disk", 00:07:25.120 "name": "dd_aio" 00:07:25.120 }, 00:07:25.120 "method": "bdev_aio_create" 00:07:25.120 }, 00:07:25.120 { 00:07:25.120 "params": { 00:07:25.120 "lvs_name": "dd_lvstore", 00:07:25.120 "lvol_name": "dd_lvol", 00:07:25.120 "size_in_mib": 36, 00:07:25.120 "thin_provision": true 00:07:25.120 }, 00:07:25.120 "method": "bdev_lvol_create" 00:07:25.120 }, 00:07:25.120 { 00:07:25.120 "method": "bdev_wait_for_examine" 00:07:25.120 } 00:07:25.120 ] 00:07:25.120 } 00:07:25.120 ] 00:07:25.120 } 00:07:25.379 [2024-07-26 07:32:50.851962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.379 [2024-07-26 07:32:50.971259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.637 [2024-07-26 07:32:51.045227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.895  Copying: 12/36 [MB] (average 480 MBps) 00:07:25.895 00:07:25.895 00:07:25.895 real 0m0.801s 00:07:25.895 user 0m0.520s 00:07:25.895 sys 0m0.436s 00:07:25.895 07:32:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.895 ************************************ 00:07:25.895 END TEST dd_sparse_file_to_bdev 00:07:25.895 ************************************ 00:07:25.895 07:32:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:25.895 07:32:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:25.895 07:32:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.895 07:32:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.895 07:32:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:26.154 ************************************ 00:07:26.154 START TEST dd_sparse_bdev_to_file 00:07:26.154 ************************************ 00:07:26.154 07:32:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:07:26.154 07:32:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:26.154 07:32:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:26.154 07:32:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:26.154 07:32:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:26.154 07:32:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:26.154 07:32:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:26.154 07:32:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # 
xtrace_disable 00:07:26.154 07:32:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:26.154 [2024-07-26 07:32:51.562351] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:07:26.154 [2024-07-26 07:32:51.562442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63768 ] 00:07:26.154 { 00:07:26.154 "subsystems": [ 00:07:26.154 { 00:07:26.154 "subsystem": "bdev", 00:07:26.154 "config": [ 00:07:26.154 { 00:07:26.154 "params": { 00:07:26.154 "block_size": 4096, 00:07:26.154 "filename": "dd_sparse_aio_disk", 00:07:26.154 "name": "dd_aio" 00:07:26.154 }, 00:07:26.154 "method": "bdev_aio_create" 00:07:26.154 }, 00:07:26.154 { 00:07:26.154 "method": "bdev_wait_for_examine" 00:07:26.154 } 00:07:26.154 ] 00:07:26.154 } 00:07:26.154 ] 00:07:26.154 } 00:07:26.154 [2024-07-26 07:32:51.700707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.413 [2024-07-26 07:32:51.792838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.413 [2024-07-26 07:32:51.867587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.980  Copying: 12/36 [MB] (average 857 MBps) 00:07:26.980 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:26.980 ************************************ 00:07:26.980 END TEST dd_sparse_bdev_to_file 00:07:26.980 ************************************ 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:26.980 00:07:26.980 real 0m0.837s 00:07:26.980 user 0m0.540s 00:07:26.980 sys 0m0.458s 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:26.980 00:07:26.980 real 0m2.808s 00:07:26.980 user 
0m1.709s 00:07:26.980 sys 0m1.559s 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.980 07:32:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:26.980 ************************************ 00:07:26.980 END TEST spdk_dd_sparse 00:07:26.980 ************************************ 00:07:26.980 07:32:52 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:26.980 07:32:52 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.980 07:32:52 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.980 07:32:52 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:26.980 ************************************ 00:07:26.980 START TEST spdk_dd_negative 00:07:26.980 ************************************ 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:26.980 * Looking for test storage... 00:07:26.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:26.980 ************************************ 00:07:26.980 START TEST dd_invalid_arguments 00:07:26.980 ************************************ 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.980 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.981 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.981 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.981 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.981 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.981 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:26.981 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:27.238 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:27.238 00:07:27.238 CPU options: 00:07:27.239 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:27.239 (like [0,1,10]) 00:07:27.239 --lcores lcore to CPU mapping list. The list is in the format: 00:07:27.239 [<,lcores[@CPUs]>...] 00:07:27.239 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:27.239 Within the group, '-' is used for range separator, 00:07:27.239 ',' is used for single number separator. 00:07:27.239 '( )' can be omitted for single element group, 00:07:27.239 '@' can be omitted if cpus and lcores have the same value 00:07:27.239 --disable-cpumask-locks Disable CPU core lock files. 00:07:27.239 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:27.239 pollers in the app support interrupt mode) 00:07:27.239 -p, --main-core main (primary) core for DPDK 00:07:27.239 00:07:27.239 Configuration options: 00:07:27.239 -c, --config, --json JSON config file 00:07:27.239 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:27.239 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:27.239 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:27.239 --rpcs-allowed comma-separated list of permitted RPCS 00:07:27.239 --json-ignore-init-errors don't exit on invalid config entry 00:07:27.239 00:07:27.239 Memory options: 00:07:27.239 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:27.239 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:27.239 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:27.239 -R, --huge-unlink unlink huge files after initialization 00:07:27.239 -n, --mem-channels number of memory channels used for DPDK 00:07:27.239 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:27.239 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:27.239 --no-huge run without using hugepages 00:07:27.239 -i, --shm-id shared memory ID (optional) 00:07:27.239 -g, --single-file-segments force creating just one hugetlbfs file 00:07:27.239 00:07:27.239 PCI options: 00:07:27.239 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:27.239 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:27.239 -u, --no-pci disable PCI access 00:07:27.239 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:27.239 00:07:27.239 Log options: 00:07:27.239 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:27.239 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:27.239 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:27.239 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:27.239 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:27.239 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:27.239 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:27.239 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:27.239 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:27.239 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:27.239 virtio_vfio_user, vmd) 00:07:27.239 --silence-noticelog 
disable notice level logging to stderr 00:07:27.239 00:07:27.239 Trace options: 00:07:27.239 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:27.239 setting 0 to disable trace (default 32768) 00:07:27.239 Tracepoints vary in size and can use more than one trace entry. 00:07:27.239 -e, --tpoint-group [:] 00:07:27.239 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:27.239 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:27.239 [2024-07-26 07:32:52.609734] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:27.239 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:27.239 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:27.239 a tracepoint group. First tpoint inside a group can be enabled by 00:07:27.239 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:27.239 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:27.239 in /include/spdk_internal/trace_defs.h 00:07:27.239 00:07:27.239 Other options: 00:07:27.239 -h, --help show this usage 00:07:27.239 -v, --version print SPDK version 00:07:27.239 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:27.239 --env-context Opaque context for use of the env implementation 00:07:27.239 00:07:27.239 Application specific: 00:07:27.239 [--------- DD Options ---------] 00:07:27.239 --if Input file. Must specify either --if or --ib. 00:07:27.239 --ib Input bdev. Must specifier either --if or --ib 00:07:27.239 --of Output file. Must specify either --of or --ob. 00:07:27.239 --ob Output bdev. Must specify either --of or --ob. 00:07:27.239 --iflag Input file flags. 00:07:27.239 --oflag Output file flags. 00:07:27.239 --bs I/O unit size (default: 4096) 00:07:27.239 --qd Queue depth (default: 2) 00:07:27.239 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:27.239 --skip Skip this many I/O units at start of input. (default: 0) 00:07:27.239 --seek Skip this many I/O units at start of output. (default: 0) 00:07:27.239 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:07:27.239 --sparse Enable hole skipping in input target 00:07:27.239 Available iflag and oflag values: 00:07:27.239 append - append mode 00:07:27.239 direct - use direct I/O for data 00:07:27.239 directory - fail unless a directory 00:07:27.239 dsync - use synchronized I/O for data 00:07:27.239 noatime - do not update access time 00:07:27.239 noctty - do not assign controlling terminal from file 00:07:27.239 nofollow - do not follow symlinks 00:07:27.239 nonblock - use non-blocking I/O 00:07:27.239 sync - use synchronized I/O for data and metadata 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:27.239 00:07:27.239 real 0m0.067s 00:07:27.239 user 0m0.040s 00:07:27.239 sys 0m0.025s 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:27.239 ************************************ 00:07:27.239 END TEST dd_invalid_arguments 00:07:27.239 ************************************ 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:27.239 ************************************ 00:07:27.239 START TEST dd_double_input 00:07:27.239 ************************************ 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:27.239 [2024-07-26 07:32:52.730701] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:27.239 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:27.239 00:07:27.239 real 0m0.078s 00:07:27.240 user 0m0.048s 00:07:27.240 sys 0m0.027s 00:07:27.240 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.240 07:32:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:27.240 ************************************ 00:07:27.240 END TEST dd_double_input 00:07:27.240 ************************************ 00:07:27.240 07:32:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:27.240 07:32:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.240 07:32:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.240 07:32:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:27.240 ************************************ 00:07:27.240 START TEST dd_double_output 00:07:27.240 ************************************ 00:07:27.240 07:32:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:07:27.240 07:32:52 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:27.240 07:32:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:07:27.240 07:32:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:27.240 07:32:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.240 07:32:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.240 07:32:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.240 07:32:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.240 07:32:52 
spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.240 07:32:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.240 07:32:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.240 07:32:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:27.240 07:32:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:27.498 [2024-07-26 07:32:52.857680] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:27.498 ************************************ 00:07:27.498 END TEST dd_double_output 00:07:27.498 ************************************ 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:27.498 00:07:27.498 real 0m0.077s 00:07:27.498 user 0m0.052s 00:07:27.498 sys 0m0.024s 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:27.498 ************************************ 00:07:27.498 START TEST dd_no_input 00:07:27.498 ************************************ 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:27.498 [2024-07-26 07:32:52.982985] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:27.498 ************************************ 00:07:27.498 END TEST dd_no_input 00:07:27.498 ************************************ 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:27.498 07:32:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:27.498 00:07:27.498 real 0m0.071s 00:07:27.498 user 0m0.047s 00:07:27.498 sys 0m0.022s 00:07:27.498 07:32:53 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.498 07:32:53 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:27.498 07:32:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:27.498 07:32:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.498 07:32:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.498 07:32:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:27.498 ************************************ 00:07:27.498 START TEST dd_no_output 00:07:27.498 ************************************ 00:07:27.498 07:32:53 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:07:27.498 07:32:53 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:27.498 07:32:53 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:07:27.498 07:32:53 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:27.498 07:32:53 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.498 07:32:53 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.498 07:32:53 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.498 07:32:53 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.498 07:32:53 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.499 07:32:53 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.499 07:32:53 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.499 07:32:53 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:27.499 07:32:53 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:27.757 [2024-07-26 07:32:53.101389] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:27.757 00:07:27.757 real 0m0.063s 00:07:27.757 user 0m0.041s 00:07:27.757 sys 0m0.021s 00:07:27.757 ************************************ 00:07:27.757 END TEST dd_no_output 00:07:27.757 ************************************ 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:27.757 ************************************ 00:07:27.757 START TEST dd_wrong_blocksize 00:07:27.757 ************************************ 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.757 07:32:53 
spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:27.757 [2024-07-26 07:32:53.219558] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:27.757 00:07:27.757 real 0m0.070s 00:07:27.757 user 0m0.043s 00:07:27.757 sys 0m0.024s 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:27.757 ************************************ 00:07:27.757 END TEST dd_wrong_blocksize 00:07:27.757 ************************************ 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:27.757 ************************************ 00:07:27.757 START TEST dd_smaller_blocksize 00:07:27.757 ************************************ 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.757 07:32:53 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:27.757 07:32:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:27.757 [2024-07-26 07:32:53.341601] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:07:27.757 [2024-07-26 07:32:53.341685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63992 ] 00:07:28.016 [2024-07-26 07:32:53.478760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.016 [2024-07-26 07:32:53.569537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.274 [2024-07-26 07:32:53.642367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.544 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:28.544 [2024-07-26 07:32:53.989873] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:28.544 [2024-07-26 07:32:53.989941] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.806 [2024-07-26 07:32:54.159477] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:28.806 07:32:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:07:28.806 07:32:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:28.806 07:32:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:07:28.806 ************************************ 00:07:28.806 END TEST dd_smaller_blocksize 00:07:28.806 ************************************ 00:07:28.806 07:32:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:07:28.806 07:32:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:07:28.806 07:32:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:28.806 00:07:28.806 real 0m0.986s 00:07:28.806 user 0m0.440s 00:07:28.806 sys 0m0.438s 00:07:28.806 07:32:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.806 07:32:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:28.806 07:32:54 
spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:28.806 07:32:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.806 07:32:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.806 07:32:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:28.806 ************************************ 00:07:28.806 START TEST dd_invalid_count 00:07:28.806 ************************************ 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:28.807 [2024-07-26 07:32:54.384604] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:28.807 00:07:28.807 real 0m0.076s 00:07:28.807 user 0m0.052s 00:07:28.807 sys 0m0.023s 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.807 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_count 
-- common/autotest_common.sh@10 -- # set +x 00:07:28.807 ************************************ 00:07:28.807 END TEST dd_invalid_count 00:07:28.807 ************************************ 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:29.064 ************************************ 00:07:29.064 START TEST dd_invalid_oflag 00:07:29.064 ************************************ 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:29.064 [2024-07-26 07:32:54.511034] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:29.064 00:07:29.064 real 0m0.073s 00:07:29.064 user 0m0.047s 00:07:29.064 sys 0m0.025s 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 
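The invalid-count and invalid-oflag cases above follow the same negative-test pattern used throughout this suite: spdk_dd is run with deliberately bad arguments, and the test passes only if the binary exits non-zero. The real plumbing is the NOT/valid_exec_arg machinery in common/autotest_common.sh; the lines below are only a simplified standalone sketch of that idea (SPDK_DD is a hypothetical shell variable introduced for the sketch, not something defined by this log):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# --oflag is only meaningful together with --of, so this invocation must fail.
if "$SPDK_DD" --ib= --ob= --oflag=0 2>/dev/null; then
    echo "unexpected success" >&2
    exit 1
fi
echo "spdk_dd rejected the arguments as expected"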
00:07:29.064 ************************************ 00:07:29.064 END TEST dd_invalid_oflag 00:07:29.064 ************************************ 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:29.064 ************************************ 00:07:29.064 START TEST dd_invalid_iflag 00:07:29.064 ************************************ 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:29.064 [2024-07-26 07:32:54.637024] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:29.064 00:07:29.064 real 0m0.074s 00:07:29.064 user 0m0.046s 00:07:29.064 sys 0m0.027s 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.064 07:32:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:29.064 ************************************ 
00:07:29.064 END TEST dd_invalid_iflag 00:07:29.064 ************************************ 00:07:29.322 07:32:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:29.322 07:32:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.322 07:32:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.322 07:32:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:29.322 ************************************ 00:07:29.322 START TEST dd_unknown_flag 00:07:29.322 ************************************ 00:07:29.322 07:32:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:07:29.322 07:32:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:29.322 07:32:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:07:29.322 07:32:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:29.322 07:32:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.322 07:32:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.322 07:32:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.322 07:32:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.322 07:32:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.322 07:32:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.322 07:32:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.322 07:32:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:29.322 07:32:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:29.322 [2024-07-26 07:32:54.763803] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
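By contrast with the unknown-flag invocation (--oflag=-1) being exercised here, a well-formed spdk_dd copy names exactly one input (--if or --ib), exactly one output (--of or --ob), and only the flag values listed in the usage text captured earlier (direct, dsync, nonblock, and so on). A hedged sketch of a valid run follows; the file paths and sizes are illustrative only and do not come from this log:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/tmp/input.bin --of=/tmp/output.bin \
    --bs=4096 --qd=2 \
    --iflag=direct --oflag=direct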
00:07:29.322 [2024-07-26 07:32:54.763903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64084 ] 00:07:29.322 [2024-07-26 07:32:54.903007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.580 [2024-07-26 07:32:55.000687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.580 [2024-07-26 07:32:55.074643] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.580 [2024-07-26 07:32:55.117071] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:29.580 [2024-07-26 07:32:55.117206] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.580 [2024-07-26 07:32:55.117288] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:29.580 [2024-07-26 07:32:55.117302] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.580 [2024-07-26 07:32:55.117581] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:29.580 [2024-07-26 07:32:55.117606] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.580 [2024-07-26 07:32:55.117661] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:29.580 [2024-07-26 07:32:55.117672] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:29.837 [2024-07-26 07:32:55.278265] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:29.837 07:32:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:07:29.837 07:32:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:29.837 07:32:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:07:29.837 07:32:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:07:29.837 07:32:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:07:29.837 07:32:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:29.837 00:07:29.837 real 0m0.692s 00:07:29.837 user 0m0.400s 00:07:29.837 sys 0m0.193s 00:07:29.837 07:32:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.837 07:32:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:29.837 ************************************ 00:07:29.837 END TEST dd_unknown_flag 00:07:29.837 ************************************ 00:07:30.095 07:32:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:30.095 07:32:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.095 07:32:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.095 07:32:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:30.095 ************************************ 00:07:30.095 START TEST dd_invalid_json 00:07:30.095 ************************************ 00:07:30.095 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:07:30.095 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:30.095 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:07:30.095 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:30.095 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:30.095 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.095 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.095 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.095 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.095 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.095 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.095 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.095 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:30.095 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:30.095 [2024-07-26 07:32:55.504934] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
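The dd_invalid_json case launched above hands spdk_dd an SPDK JSON configuration through a file descriptor (--json /dev/fd/62) but feeds that descriptor nothing, so the parser is expected to reject it, as the output below shows. For reference, the same fd-passing pattern with a non-empty configuration would look roughly like the following sketch; the config file, its contents, and the data paths are illustrative assumptions, not taken from this log:

cat > /tmp/dd_config.json <<'EOF'
{ "subsystems": [] }
EOF
# fd 62 is opened on the config file, matching the /dev/fd/62 path given to --json.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/tmp/input.bin --of=/tmp/output.bin \
    --json /dev/fd/62 62</tmp/dd_config.json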
00:07:30.095 [2024-07-26 07:32:55.505037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64118 ] 00:07:30.095 [2024-07-26 07:32:55.643728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.353 [2024-07-26 07:32:55.764230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.353 [2024-07-26 07:32:55.764330] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:30.353 [2024-07-26 07:32:55.764345] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:30.353 [2024-07-26 07:32:55.764354] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.353 [2024-07-26 07:32:55.764393] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:30.353 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:07:30.353 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.353 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:07:30.353 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:07:30.353 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:07:30.353 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.353 00:07:30.353 real 0m0.424s 00:07:30.353 user 0m0.237s 00:07:30.353 sys 0m0.085s 00:07:30.353 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.353 07:32:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:30.353 ************************************ 00:07:30.353 END TEST dd_invalid_json 00:07:30.353 ************************************ 00:07:30.353 00:07:30.353 real 0m3.460s 00:07:30.353 user 0m1.706s 00:07:30.353 sys 0m1.380s 00:07:30.353 07:32:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.353 07:32:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:30.353 ************************************ 00:07:30.353 END TEST spdk_dd_negative 00:07:30.353 ************************************ 00:07:30.611 00:07:30.611 real 1m32.138s 00:07:30.611 user 1m0.421s 00:07:30.611 sys 0m40.861s 00:07:30.611 07:32:55 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.611 ************************************ 00:07:30.611 07:32:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:30.611 END TEST spdk_dd 00:07:30.611 ************************************ 00:07:30.611 07:32:55 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:07:30.611 07:32:55 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:30.611 07:32:55 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:30.611 07:32:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:30.611 07:32:55 -- common/autotest_common.sh@10 -- # set +x 00:07:30.611 07:32:56 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:30.611 07:32:56 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:30.611 07:32:56 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:30.611 07:32:56 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:30.611 07:32:56 -- 
spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:30.611 07:32:56 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:30.611 07:32:56 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:30.611 07:32:56 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:30.611 07:32:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.611 07:32:56 -- common/autotest_common.sh@10 -- # set +x 00:07:30.611 ************************************ 00:07:30.612 START TEST nvmf_tcp 00:07:30.612 ************************************ 00:07:30.612 07:32:56 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:30.612 * Looking for test storage... 00:07:30.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:30.612 07:32:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:30.612 07:32:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:30.612 07:32:56 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:30.612 07:32:56 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:30.612 07:32:56 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.612 07:32:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.612 ************************************ 00:07:30.612 START TEST nvmf_target_core 00:07:30.612 ************************************ 00:07:30.612 07:32:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:30.871 * Looking for test storage... 00:07:30.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.871 07:32:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.872 ************************************ 00:07:30.872 START TEST nvmf_host_management 00:07:30.872 ************************************ 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:30.872 * Looking for test storage... 
00:07:30.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:30.872 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 
-- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:30.873 Cannot find device "nvmf_init_br" 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:30.873 Cannot find device "nvmf_tgt_br" 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:30.873 Cannot find device "nvmf_tgt_br2" 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:30.873 Cannot find device "nvmf_init_br" 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:30.873 Cannot find device "nvmf_tgt_br" 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:30.873 Cannot find device "nvmf_tgt_br2" 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:30.873 Cannot find device "nvmf_br" 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:30.873 Cannot find device "nvmf_init_if" 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:30.873 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:30.873 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:30.873 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 
-- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:31.132 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:31.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:07:31.390 00:07:31.390 --- 10.0.0.2 ping statistics --- 00:07:31.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.390 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:07:31.390 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:31.390 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:07:31.390 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:07:31.390 00:07:31.390 --- 10.0.0.3 ping statistics --- 00:07:31.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.390 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:07:31.390 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:31.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:31.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:31.390 00:07:31.390 --- 10.0.0.1 ping statistics --- 00:07:31.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.390 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:31.390 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.390 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:31.390 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:31.390 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.390 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:31.390 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:31.390 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.390 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:31.390 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:31.390 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:31.390 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:31.390 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:31.391 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:31.391 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:31.391 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.391 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=64405 00:07:31.391 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:31.391 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 64405 00:07:31.391 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 64405 ']' 00:07:31.391 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.391 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.391 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:31.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.391 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.391 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.391 [2024-07-26 07:32:56.839883] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:07:31.391 [2024-07-26 07:32:56.839981] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.391 [2024-07-26 07:32:56.984064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.650 [2024-07-26 07:32:57.128077] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.650 [2024-07-26 07:32:57.128155] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.650 [2024-07-26 07:32:57.128179] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.650 [2024-07-26 07:32:57.128190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.650 [2024-07-26 07:32:57.128199] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.650 [2024-07-26 07:32:57.128335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.650 [2024-07-26 07:32:57.128536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.650 [2024-07-26 07:32:57.128624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:31.650 [2024-07-26 07:32:57.128628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.650 [2024-07-26 07:32:57.206326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:32.301 07:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.301 07:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:32.301 07:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:32.301 07:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.301 07:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.559 07:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.559 07:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:32.559 07:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.559 07:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.559 [2024-07-26 07:32:57.916055] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.560 07:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.560 07:32:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:32.560 07:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:32.560 07:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.560 07:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:32.560 07:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:32.560 07:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:32.560 07:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.560 07:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.560 Malloc0 00:07:32.560 [2024-07-26 07:32:57.994780] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64459 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64459 /var/tmp/bdevperf.sock 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 64459 ']' 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
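The bdevperf run being launched here is driven by a JSON config produced on the fly: the two host_management.sh@72 entries above invoke gen_nvmf_target_json 0 and hand the result to bdevperf as --json /dev/fd/63, which suggests the helper's output is fed in through bash process substitution. A minimal sketch of that wiring, with every flag taken from the trace and only the process substitution assumed:

    # Sketch: run bdevperf against subsystem 0 using a generated NVMe-oF config.
    # -r          private RPC socket for this bdevperf instance
    # -q 64       64 outstanding I/Os
    # -o 65536    64 KiB I/O size
    # -w verify -t 10   verify workload for 10 seconds
    # (process substitution assumed; the trace only shows --json /dev/fd/63)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10

The generated config, printed in full in the trace below, attaches a bdev named Nvme0 over TCP to 10.0.0.2:4420 (nqn.2016-06.io.spdk:cnode0) with header and data digests disabled.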
00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:32.560 { 00:07:32.560 "params": { 00:07:32.560 "name": "Nvme$subsystem", 00:07:32.560 "trtype": "$TEST_TRANSPORT", 00:07:32.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:32.560 "adrfam": "ipv4", 00:07:32.560 "trsvcid": "$NVMF_PORT", 00:07:32.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:32.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:32.560 "hdgst": ${hdgst:-false}, 00:07:32.560 "ddgst": ${ddgst:-false} 00:07:32.560 }, 00:07:32.560 "method": "bdev_nvme_attach_controller" 00:07:32.560 } 00:07:32.560 EOF 00:07:32.560 )") 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:32.560 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:32.560 "params": { 00:07:32.560 "name": "Nvme0", 00:07:32.560 "trtype": "tcp", 00:07:32.560 "traddr": "10.0.0.2", 00:07:32.560 "adrfam": "ipv4", 00:07:32.560 "trsvcid": "4420", 00:07:32.560 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:32.560 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:32.560 "hdgst": false, 00:07:32.560 "ddgst": false 00:07:32.560 }, 00:07:32.560 "method": "bdev_nvme_attach_controller" 00:07:32.560 }' 00:07:32.560 [2024-07-26 07:32:58.106991] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:07:32.560 [2024-07-26 07:32:58.107087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64459 ] 00:07:32.818 [2024-07-26 07:32:58.249257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.819 [2024-07-26 07:32:58.382821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.076 [2024-07-26 07:32:58.472080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:33.076 Running I/O for 10 seconds... 
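With bdevperf running, the waitforio step that follows polls the bdevperf RPC socket until the Nvme0n1 bdev reports at least 100 completed reads, and only then removes the host from the subsystem to provoke a controller reset (the ABORTED - SQ DELETION dump further down). A rough reconstruction of that polling loop, built from the rpc_cmd and jq calls visible in the trace; the rpc.py path and the sleep between polls are assumptions, not shown in the log:

    # Poll up to 10 times for >=100 completed reads on Nvme0n1, then remove the
    # host from cnode0 so the initiator sees its queues torn down mid-I/O.
    # (rpc.py path assumed; the test itself uses the rpc_cmd helper.)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 10; i != 0; i--)); do
        reads=$("$rpc" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break
        sleep 1
    done
    "$rpc" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

In this run the first poll already reports 835 reads, so the loop exits immediately and the remove_host call lands while I/O is still in flight, which is the host-removal path this test is meant to exercise.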
00:07:33.641 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.641 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:33.641 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:33.641 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.641 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.641 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.641 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:33.641 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:33.641 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:33.641 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:33.641 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:33.641 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:33.641 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:33.642 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:33.642 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:33.642 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.642 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.642 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:33.642 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.642 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:07:33.642 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:07:33.642 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:33.642 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:33.642 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:33.642 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:33.642 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.642 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.642 [2024-07-26 
07:32:59.237339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.642 [2024-07-26 07:32:59.237395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.642 [2024-07-26 07:32:59.237410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.642 [2024-07-26 07:32:59.237420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.642 [2024-07-26 07:32:59.237431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.642 [2024-07-26 07:32:59.237440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.642 [2024-07-26 07:32:59.237452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.642 [2024-07-26 07:32:59.237461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.642 [2024-07-26 07:32:59.237483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211d50 is same with the state(5) to be set 00:07:33.642 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.642 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:33.642 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.642 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.902 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.902 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:33.902 [2024-07-26 07:32:59.259068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:33.902 [2024-07-26 07:32:59.259187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:33.902 [2024-07-26 07:32:59.259389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:33.902 [2024-07-26 07:32:59.259637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.902 [2024-07-26 07:32:59.259832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.902 [2024-07-26 07:32:59.259849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:33.903 [2024-07-26 07:32:59.259859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.259871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.259880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.259891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.259901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.259912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.259921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.259932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.259941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.259953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.259962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.259972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.259981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.259993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:33.903 [2024-07-26 07:32:59.260071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:33.903 [2024-07-26 07:32:59.260276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-26 07:32:59.260474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:33.903 [2024-07-26 07:32:59.260497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.903 [2024-07-26 07:32:59.260508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219ec0 is same with the state(5) to be set 00:07:33.903 [2024-07-26 07:32:59.260595] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2219ec0 was disconnected and freed. reset controller. 00:07:33.903 [2024-07-26 07:32:59.260678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2211d50 (9): Bad file descriptor 00:07:33.903 [2024-07-26 07:32:59.261782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:33.903 task offset: 122880 on job bdev=Nvme0n1 fails 00:07:33.903 00:07:33.903 Latency(us) 00:07:33.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.903 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:33.903 Job: Nvme0n1 ended in about 0.66 seconds with error 00:07:33.903 Verification LBA range: start 0x0 length 0x400 00:07:33.903 Nvme0n1 : 0.66 1451.68 90.73 96.78 0.00 40288.99 1966.08 40989.79 00:07:33.903 =================================================================================================================== 00:07:33.903 Total : 1451.68 90.73 96.78 0.00 40288.99 1966.08 40989.79 00:07:33.903 [2024-07-26 07:32:59.263772] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:33.903 [2024-07-26 07:32:59.267375] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:34.838 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64459 00:07:34.838 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64459) - No such process 00:07:34.838 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:34.838 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:34.838 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:34.838 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:34.838 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:34.838 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:34.838 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:34.838 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:34.838 { 00:07:34.838 "params": { 00:07:34.838 "name": "Nvme$subsystem", 00:07:34.838 "trtype": "$TEST_TRANSPORT", 00:07:34.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:34.838 "adrfam": "ipv4", 00:07:34.838 "trsvcid": "$NVMF_PORT", 00:07:34.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:34.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:34.838 "hdgst": ${hdgst:-false}, 00:07:34.838 "ddgst": ${ddgst:-false} 00:07:34.838 }, 
00:07:34.838 "method": "bdev_nvme_attach_controller" 00:07:34.838 } 00:07:34.838 EOF 00:07:34.838 )") 00:07:34.838 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:34.838 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:34.838 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:34.838 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:34.838 "params": { 00:07:34.838 "name": "Nvme0", 00:07:34.838 "trtype": "tcp", 00:07:34.838 "traddr": "10.0.0.2", 00:07:34.838 "adrfam": "ipv4", 00:07:34.838 "trsvcid": "4420", 00:07:34.838 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:34.838 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:34.838 "hdgst": false, 00:07:34.838 "ddgst": false 00:07:34.838 }, 00:07:34.838 "method": "bdev_nvme_attach_controller" 00:07:34.838 }' 00:07:34.838 [2024-07-26 07:33:00.310063] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:07:34.838 [2024-07-26 07:33:00.310153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64497 ] 00:07:35.096 [2024-07-26 07:33:00.454539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.096 [2024-07-26 07:33:00.556585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.096 [2024-07-26 07:33:00.642677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:35.355 Running I/O for 1 seconds... 00:07:36.289 00:07:36.290 Latency(us) 00:07:36.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.290 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:36.290 Verification LBA range: start 0x0 length 0x400 00:07:36.290 Nvme0n1 : 1.01 1516.53 94.78 0.00 0.00 41389.06 4289.63 38368.35 00:07:36.290 =================================================================================================================== 00:07:36.290 Total : 1516.53 94.78 0.00 0.00 41389.06 4289.63 38368.35 00:07:36.547 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:36.547 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:36.547 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:36.547 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:36.547 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:36.548 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:36.548 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i 
in {1..20} 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:36.806 rmmod nvme_tcp 00:07:36.806 rmmod nvme_fabrics 00:07:36.806 rmmod nvme_keyring 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 64405 ']' 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 64405 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 64405 ']' 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 64405 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64405 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:36.806 killing process with pid 64405 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64405' 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 64405 00:07:36.806 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 64405 00:07:37.065 [2024-07-26 07:33:02.538184] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:37.065 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:37.065 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:37.065 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:37.065 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:37.065 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:37.065 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.065 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.065 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.065 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:37.065 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:37.065 00:07:37.065 real 0m6.342s 00:07:37.065 user 0m24.626s 
00:07:37.065 sys 0m1.622s 00:07:37.065 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.065 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.065 ************************************ 00:07:37.065 END TEST nvmf_host_management 00:07:37.065 ************************************ 00:07:37.065 07:33:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:37.065 07:33:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:37.065 07:33:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.065 07:33:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:37.324 ************************************ 00:07:37.324 START TEST nvmf_lvol 00:07:37.324 ************************************ 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:37.324 * Looking for test storage... 00:07:37.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:37.324 07:33:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.324 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:37.325 Cannot find device "nvmf_tgt_br" 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:37.325 Cannot find device "nvmf_tgt_br2" 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:37.325 Cannot find device "nvmf_tgt_br" 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:37.325 Cannot find device "nvmf_tgt_br2" 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:37.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:37.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:37.325 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:37.584 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:37.584 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:37.584 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:37.584 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:37.584 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set 
nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:37.584 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:37.584 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:37.584 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:37.584 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:37.584 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:37.584 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:37.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:07:37.584 00:07:37.584 --- 10.0.0.2 ping statistics --- 00:07:37.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.584 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:37.584 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:37.584 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:07:37.584 00:07:37.584 --- 10.0.0.3 ping statistics --- 00:07:37.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.584 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:37.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:37.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:07:37.584 00:07:37.584 --- 10.0.0.1 ping statistics --- 00:07:37.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.584 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=64718 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 64718 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 64718 ']' 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.584 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:37.843 [2024-07-26 07:33:03.186906] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:07:37.843 [2024-07-26 07:33:03.187006] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.843 [2024-07-26 07:33:03.327939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.100 [2024-07-26 07:33:03.453502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.100 [2024-07-26 07:33:03.453570] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.100 [2024-07-26 07:33:03.453594] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.100 [2024-07-26 07:33:03.453615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.100 [2024-07-26 07:33:03.453625] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:38.100 [2024-07-26 07:33:03.453733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.100 [2024-07-26 07:33:03.454599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.100 [2024-07-26 07:33:03.454621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.100 [2024-07-26 07:33:03.531341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:38.666 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.666 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:38.666 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:38.666 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:38.666 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:38.666 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.666 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:38.924 [2024-07-26 07:33:04.475428] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.924 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:39.491 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:39.491 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:39.491 07:33:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:39.491 07:33:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:40.057 07:33:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:40.057 07:33:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c6d840ed-2432-44dc-bc37-4e10d9c8a26f 00:07:40.057 07:33:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c6d840ed-2432-44dc-bc37-4e10d9c8a26f lvol 20 00:07:40.315 07:33:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6fe383e7-ffd8-4d72-900c-2edf166932de 00:07:40.315 07:33:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:40.574 07:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6fe383e7-ffd8-4d72-900c-2edf166932de 00:07:40.832 07:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:41.091 [2024-07-26 07:33:06.553086] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.091 07:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:41.349 07:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=64788 00:07:41.350 07:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:41.350 07:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:42.285 07:33:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 6fe383e7-ffd8-4d72-900c-2edf166932de MY_SNAPSHOT 00:07:42.544 07:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ceae8871-51ee-44a4-9f3b-a945ee6a4009 00:07:42.544 07:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 6fe383e7-ffd8-4d72-900c-2edf166932de 30 00:07:42.803 07:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone ceae8871-51ee-44a4-9f3b-a945ee6a4009 MY_CLONE 00:07:43.060 07:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d13d963a-29e3-4789-b7c5-08f951ce2ae1 00:07:43.060 07:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate d13d963a-29e3-4789-b7c5-08f951ce2ae1 00:07:43.647 07:33:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 64788 00:07:51.759 Initializing NVMe Controllers 00:07:51.759 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:51.759 Controller IO queue size 128, less than required. 00:07:51.759 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:51.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:51.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:51.759 Initialization complete. Launching workers. 
00:07:51.759 ======================================================== 00:07:51.759 Latency(us) 00:07:51.759 Device Information : IOPS MiB/s Average min max 00:07:51.759 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10769.20 42.07 11893.64 1953.49 78558.00 00:07:51.759 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10621.20 41.49 12053.61 4039.72 79367.61 00:07:51.759 ======================================================== 00:07:51.759 Total : 21390.40 83.56 11973.07 1953.49 79367.61 00:07:51.759 00:07:51.759 07:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:52.017 07:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6fe383e7-ffd8-4d72-900c-2edf166932de 00:07:52.274 07:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c6d840ed-2432-44dc-bc37-4e10d9c8a26f 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:52.533 rmmod nvme_tcp 00:07:52.533 rmmod nvme_fabrics 00:07:52.533 rmmod nvme_keyring 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 64718 ']' 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 64718 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 64718 ']' 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 64718 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64718 00:07:52.533 killing process with pid 64718 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 64718' 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 64718 00:07:52.533 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 64718 00:07:53.099 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:53.099 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:53.099 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:53.099 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:53.099 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:53.099 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.099 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.099 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.099 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:53.100 ************************************ 00:07:53.100 END TEST nvmf_lvol 00:07:53.100 ************************************ 00:07:53.100 00:07:53.100 real 0m15.886s 00:07:53.100 user 1m5.709s 00:07:53.100 sys 0m4.164s 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:53.100 ************************************ 00:07:53.100 START TEST nvmf_lvs_grow 00:07:53.100 ************************************ 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:53.100 * Looking for test storage... 
00:07:53.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
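The nvmftestinit call the trace is entering here tears down and rebuilds the same virtual test network that the nvmf_lvol run above used. Condensed into plain commands, the setup that nvmf/common.sh traces amounts roughly to the sketch below; the namespace, interface and address names are the ones shown in the trace, and the sketch omits the error handling and the "Cannot find device" cleanup of any previous topology.

# target stack runs in its own network namespace
ip netns add nvmf_tgt_ns_spdk

# three veth pairs: one for the initiator side, two for the target side
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the host-side veth ends together and allow NVMe/TCP traffic
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity pings before starting the target inside the namespace
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

As in the earlier test, nvmf_tgt is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1), so its 10.0.0.2/10.0.0.3 listeners sit behind the veth pairs while the initiator-side tools connect from the host.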
00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.100 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.358 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:53.358 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:53.358 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:53.358 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:53.358 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:53.358 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:53.358 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.358 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.358 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:53.358 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:53.358 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:53.358 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:53.358 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:53.358 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:53.359 Cannot find device "nvmf_tgt_br" 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:53.359 Cannot find device "nvmf_tgt_br2" 00:07:53.359 07:33:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:53.359 Cannot find device "nvmf_tgt_br" 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:53.359 Cannot find device "nvmf_tgt_br2" 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:53.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:53.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:53.359 07:33:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:53.359 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:53.617 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:53.617 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:53.617 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:53.617 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:53.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:07:53.617 00:07:53.617 --- 10.0.0.2 ping statistics --- 00:07:53.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.617 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:53.617 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:53.617 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:07:53.617 00:07:53.617 --- 10.0.0.3 ping statistics --- 00:07:53.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.617 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:53.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:53.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:53.617 00:07:53.617 --- 10.0.0.1 ping statistics --- 00:07:53.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.617 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65118 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65118 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 65118 ']' 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.617 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.617 [2024-07-26 07:33:19.111371] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:07:53.617 [2024-07-26 07:33:19.111504] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.876 [2024-07-26 07:33:19.250420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.876 [2024-07-26 07:33:19.367555] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.876 [2024-07-26 07:33:19.367610] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.876 [2024-07-26 07:33:19.367637] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.876 [2024-07-26 07:33:19.367646] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.876 [2024-07-26 07:33:19.367653] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.876 [2024-07-26 07:33:19.367681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.876 [2024-07-26 07:33:19.446060] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:54.810 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.810 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:54.810 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:54.810 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:54.810 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.810 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.810 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:54.810 [2024-07-26 07:33:20.400446] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.069 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:55.069 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.069 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.069 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:55.069 ************************************ 00:07:55.069 START TEST lvs_grow_clean 00:07:55.069 ************************************ 00:07:55.069 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:55.069 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:55.069 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:55.069 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:55.069 07:33:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:55.069 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:55.069 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:55.069 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:55.069 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:55.069 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:55.328 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:55.328 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:55.586 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a4e9f59c-2954-4848-94c0-c5ff23581021 00:07:55.586 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:55.586 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4e9f59c-2954-4848-94c0-c5ff23581021 00:07:55.844 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:55.844 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:55.844 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a4e9f59c-2954-4848-94c0-c5ff23581021 lvol 150 00:07:56.102 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6d2ba110-a9c3-4c0d-a981-c117572ef40b 00:07:56.102 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:56.102 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:56.361 [2024-07-26 07:33:21.716516] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:56.361 [2024-07-26 07:33:21.716622] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:56.361 true 00:07:56.361 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:56.361 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4e9f59c-2954-4848-94c0-c5ff23581021 00:07:56.619 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:56.619 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:56.878 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6d2ba110-a9c3-4c0d-a981-c117572ef40b 00:07:57.136 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:57.136 [2024-07-26 07:33:22.713211] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.136 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:57.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:57.394 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65201 00:07:57.394 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:57.394 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:57.394 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65201 /var/tmp/bdevperf.sock 00:07:57.394 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 65201 ']' 00:07:57.394 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:57.394 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.394 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:57.394 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.394 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:57.653 [2024-07-26 07:33:23.017732] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
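Up to this point lvs_grow_clean has built the whole fixture: a 200M file-backed AIO bdev, a logical volume store with 4 MiB clusters (49 data clusters once metadata is accounted for), a 150M lvol, and a backing file already grown to 400M and rescanned, which by itself does not change the lvstore size. A condensed replay of those RPCs, with $lvs and $lvol standing for the UUIDs the create calls return:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
truncate -s 200M "$AIO"
$RPC bdev_aio_create "$AIO" aio_bdev 4096
lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)
# Grow the backing file and let the AIO bdev pick it up; the lvstore still
# reports the original cluster count until bdev_lvol_grow_lvstore runs.
truncate -s 400M "$AIO"
$RPC bdev_aio_rescan aio_bdev
$RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49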
00:07:57.653 [2024-07-26 07:33:23.018210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65201 ] 00:07:57.653 [2024-07-26 07:33:23.153962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.910 [2024-07-26 07:33:23.300288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.910 [2024-07-26 07:33:23.379318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:58.475 07:33:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.475 07:33:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:58.475 07:33:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:58.741 Nvme0n1 00:07:58.741 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:59.029 [ 00:07:59.029 { 00:07:59.029 "name": "Nvme0n1", 00:07:59.029 "aliases": [ 00:07:59.029 "6d2ba110-a9c3-4c0d-a981-c117572ef40b" 00:07:59.029 ], 00:07:59.029 "product_name": "NVMe disk", 00:07:59.029 "block_size": 4096, 00:07:59.029 "num_blocks": 38912, 00:07:59.029 "uuid": "6d2ba110-a9c3-4c0d-a981-c117572ef40b", 00:07:59.029 "assigned_rate_limits": { 00:07:59.029 "rw_ios_per_sec": 0, 00:07:59.029 "rw_mbytes_per_sec": 0, 00:07:59.029 "r_mbytes_per_sec": 0, 00:07:59.029 "w_mbytes_per_sec": 0 00:07:59.029 }, 00:07:59.029 "claimed": false, 00:07:59.029 "zoned": false, 00:07:59.029 "supported_io_types": { 00:07:59.029 "read": true, 00:07:59.029 "write": true, 00:07:59.029 "unmap": true, 00:07:59.029 "flush": true, 00:07:59.029 "reset": true, 00:07:59.029 "nvme_admin": true, 00:07:59.029 "nvme_io": true, 00:07:59.029 "nvme_io_md": false, 00:07:59.029 "write_zeroes": true, 00:07:59.029 "zcopy": false, 00:07:59.029 "get_zone_info": false, 00:07:59.029 "zone_management": false, 00:07:59.029 "zone_append": false, 00:07:59.029 "compare": true, 00:07:59.029 "compare_and_write": true, 00:07:59.029 "abort": true, 00:07:59.029 "seek_hole": false, 00:07:59.029 "seek_data": false, 00:07:59.029 "copy": true, 00:07:59.029 "nvme_iov_md": false 00:07:59.029 }, 00:07:59.029 "memory_domains": [ 00:07:59.029 { 00:07:59.029 "dma_device_id": "system", 00:07:59.029 "dma_device_type": 1 00:07:59.029 } 00:07:59.029 ], 00:07:59.029 "driver_specific": { 00:07:59.029 "nvme": [ 00:07:59.029 { 00:07:59.029 "trid": { 00:07:59.029 "trtype": "TCP", 00:07:59.029 "adrfam": "IPv4", 00:07:59.029 "traddr": "10.0.0.2", 00:07:59.029 "trsvcid": "4420", 00:07:59.029 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:59.029 }, 00:07:59.029 "ctrlr_data": { 00:07:59.029 "cntlid": 1, 00:07:59.029 "vendor_id": "0x8086", 00:07:59.029 "model_number": "SPDK bdev Controller", 00:07:59.029 "serial_number": "SPDK0", 00:07:59.029 "firmware_revision": "24.09", 00:07:59.029 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:59.029 "oacs": { 00:07:59.029 "security": 0, 00:07:59.029 "format": 0, 00:07:59.029 "firmware": 0, 00:07:59.029 "ns_manage": 0 
00:07:59.029 }, 00:07:59.029 "multi_ctrlr": true, 00:07:59.029 "ana_reporting": false 00:07:59.029 }, 00:07:59.029 "vs": { 00:07:59.029 "nvme_version": "1.3" 00:07:59.029 }, 00:07:59.029 "ns_data": { 00:07:59.029 "id": 1, 00:07:59.029 "can_share": true 00:07:59.029 } 00:07:59.029 } 00:07:59.029 ], 00:07:59.029 "mp_policy": "active_passive" 00:07:59.029 } 00:07:59.029 } 00:07:59.029 ] 00:07:59.029 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:59.029 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65230 00:07:59.029 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:59.287 Running I/O for 10 seconds... 00:08:00.221 Latency(us) 00:08:00.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.221 Nvme0n1 : 1.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:00.221 =================================================================================================================== 00:08:00.221 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:00.221 00:08:01.156 07:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a4e9f59c-2954-4848-94c0-c5ff23581021 00:08:01.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.156 Nvme0n1 : 2.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:01.156 =================================================================================================================== 00:08:01.156 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:01.156 00:08:01.414 true 00:08:01.414 07:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4e9f59c-2954-4848-94c0-c5ff23581021 00:08:01.414 07:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:01.673 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:01.673 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:01.673 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65230 00:08:02.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.237 Nvme0n1 : 3.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:02.237 =================================================================================================================== 00:08:02.237 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:02.237 00:08:03.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.171 Nvme0n1 : 4.00 7461.25 29.15 0.00 0.00 0.00 0.00 0.00 00:08:03.171 =================================================================================================================== 00:08:03.171 Total : 7461.25 29.15 0.00 0.00 0.00 0.00 0.00 00:08:03.171 00:08:04.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.106 Nvme0n1 : 5.00 
7467.60 29.17 0.00 0.00 0.00 0.00 0.00 00:08:04.106 =================================================================================================================== 00:08:04.106 Total : 7467.60 29.17 0.00 0.00 0.00 0.00 0.00 00:08:04.106 00:08:05.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.480 Nvme0n1 : 6.00 7429.50 29.02 0.00 0.00 0.00 0.00 0.00 00:08:05.480 =================================================================================================================== 00:08:05.480 Total : 7429.50 29.02 0.00 0.00 0.00 0.00 0.00 00:08:05.480 00:08:06.415 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.415 Nvme0n1 : 7.00 7438.57 29.06 0.00 0.00 0.00 0.00 0.00 00:08:06.415 =================================================================================================================== 00:08:06.415 Total : 7438.57 29.06 0.00 0.00 0.00 0.00 0.00 00:08:06.415 00:08:07.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.351 Nvme0n1 : 8.00 7413.62 28.96 0.00 0.00 0.00 0.00 0.00 00:08:07.351 =================================================================================================================== 00:08:07.351 Total : 7413.62 28.96 0.00 0.00 0.00 0.00 0.00 00:08:07.351 00:08:08.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.288 Nvme0n1 : 9.00 7394.22 28.88 0.00 0.00 0.00 0.00 0.00 00:08:08.288 =================================================================================================================== 00:08:08.288 Total : 7394.22 28.88 0.00 0.00 0.00 0.00 0.00 00:08:08.288 00:08:09.222 00:08:09.222 Latency(us) 00:08:09.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.222 Nvme0n1 : 10.00 7377.75 28.82 0.00 0.00 17343.48 14417.92 42181.35 00:08:09.222 =================================================================================================================== 00:08:09.222 Total : 7377.75 28.82 0.00 0.00 17343.48 14417.92 42181.35 00:08:09.222 0 00:08:09.222 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65201 00:08:09.222 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 65201 ']' 00:08:09.222 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 65201 00:08:09.222 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:09.222 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.222 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65201 00:08:09.223 killing process with pid 65201 00:08:09.223 Received shutdown signal, test time was about 10.000000 seconds 00:08:09.223 00:08:09.223 Latency(us) 00:08:09.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.223 =================================================================================================================== 00:08:09.223 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:09.223 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 
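The pass criterion for the clean case sits in the middle of that 10-second run: at the two-second mark the lvstore is grown while bdevperf keeps writing, and total_data_clusters has to jump from 49 to 99 (a 400M pool of 4 MiB clusters, less metadata) without disturbing the workload. Reduced to its two RPCs, with $lvs again standing for the lvstore UUID:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_lvol_grow_lvstore -u "$lvs"
clusters=$($RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
[ "$clusters" -eq 99 ]   # was 49 before the grow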
00:08:09.223 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:09.223 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65201' 00:08:09.223 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 65201 00:08:09.223 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 65201 00:08:09.481 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.738 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:10.304 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:10.304 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4e9f59c-2954-4848-94c0-c5ff23581021 00:08:10.304 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:10.304 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:10.304 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:10.563 [2024-07-26 07:33:36.082199] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:10.563 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4e9f59c-2954-4848-94c0-c5ff23581021 00:08:10.563 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:10.563 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4e9f59c-2954-4848-94c0-c5ff23581021 00:08:10.563 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:10.563 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.563 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:10.563 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.563 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:10.563 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.563 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:10.563 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:10.563 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4e9f59c-2954-4848-94c0-c5ff23581021 00:08:10.821 request: 00:08:10.821 { 00:08:10.821 "uuid": "a4e9f59c-2954-4848-94c0-c5ff23581021", 00:08:10.821 "method": "bdev_lvol_get_lvstores", 00:08:10.821 "req_id": 1 00:08:10.821 } 00:08:10.821 Got JSON-RPC error response 00:08:10.821 response: 00:08:10.821 { 00:08:10.821 "code": -19, 00:08:10.821 "message": "No such device" 00:08:10.821 } 00:08:10.821 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:10.821 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:10.821 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:10.821 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:10.821 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:11.079 aio_bdev 00:08:11.079 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6d2ba110-a9c3-4c0d-a981-c117572ef40b 00:08:11.079 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=6d2ba110-a9c3-4c0d-a981-c117572ef40b 00:08:11.079 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:11.079 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:11.079 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:11.079 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:11.079 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:11.337 07:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d2ba110-a9c3-4c0d-a981-c117572ef40b -t 2000 00:08:11.596 [ 00:08:11.596 { 00:08:11.596 "name": "6d2ba110-a9c3-4c0d-a981-c117572ef40b", 00:08:11.596 "aliases": [ 00:08:11.596 "lvs/lvol" 00:08:11.596 ], 00:08:11.596 "product_name": "Logical Volume", 00:08:11.596 "block_size": 4096, 00:08:11.596 "num_blocks": 38912, 00:08:11.596 "uuid": "6d2ba110-a9c3-4c0d-a981-c117572ef40b", 00:08:11.596 "assigned_rate_limits": { 00:08:11.596 "rw_ios_per_sec": 0, 00:08:11.596 "rw_mbytes_per_sec": 0, 00:08:11.596 "r_mbytes_per_sec": 0, 00:08:11.596 "w_mbytes_per_sec": 0 00:08:11.596 }, 00:08:11.596 "claimed": false, 00:08:11.596 "zoned": false, 00:08:11.596 "supported_io_types": { 00:08:11.596 "read": true, 00:08:11.596 "write": true, 00:08:11.596 "unmap": true, 00:08:11.596 "flush": false, 
00:08:11.596 "reset": true, 00:08:11.596 "nvme_admin": false, 00:08:11.596 "nvme_io": false, 00:08:11.596 "nvme_io_md": false, 00:08:11.596 "write_zeroes": true, 00:08:11.596 "zcopy": false, 00:08:11.596 "get_zone_info": false, 00:08:11.596 "zone_management": false, 00:08:11.596 "zone_append": false, 00:08:11.596 "compare": false, 00:08:11.596 "compare_and_write": false, 00:08:11.596 "abort": false, 00:08:11.596 "seek_hole": true, 00:08:11.596 "seek_data": true, 00:08:11.596 "copy": false, 00:08:11.596 "nvme_iov_md": false 00:08:11.596 }, 00:08:11.596 "driver_specific": { 00:08:11.596 "lvol": { 00:08:11.596 "lvol_store_uuid": "a4e9f59c-2954-4848-94c0-c5ff23581021", 00:08:11.596 "base_bdev": "aio_bdev", 00:08:11.596 "thin_provision": false, 00:08:11.596 "num_allocated_clusters": 38, 00:08:11.596 "snapshot": false, 00:08:11.596 "clone": false, 00:08:11.596 "esnap_clone": false 00:08:11.596 } 00:08:11.596 } 00:08:11.596 } 00:08:11.596 ] 00:08:11.596 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:11.596 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4e9f59c-2954-4848-94c0-c5ff23581021 00:08:11.596 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:11.855 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:11.855 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4e9f59c-2954-4848-94c0-c5ff23581021 00:08:11.855 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:12.116 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:12.116 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6d2ba110-a9c3-4c0d-a981-c117572ef40b 00:08:12.381 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a4e9f59c-2954-4848-94c0-c5ff23581021 00:08:12.643 07:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:12.901 07:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:13.160 ************************************ 00:08:13.160 END TEST lvs_grow_clean 00:08:13.160 ************************************ 00:08:13.160 00:08:13.160 real 0m18.267s 00:08:13.160 user 0m17.051s 00:08:13.160 sys 0m2.668s 00:08:13.160 07:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.160 07:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:13.160 07:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:13.160 07:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:13.160 07:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.160 07:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:13.160 ************************************ 00:08:13.160 START TEST lvs_grow_dirty 00:08:13.160 ************************************ 00:08:13.160 07:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:13.160 07:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:13.160 07:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:13.160 07:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:13.160 07:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:13.160 07:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:13.160 07:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:13.160 07:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:13.160 07:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:13.160 07:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:13.417 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:13.417 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:13.983 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2d75d879-1523-41f8-816e-3e5597ae4187 00:08:13.983 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:13.983 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d75d879-1523-41f8-816e-3e5597ae4187 00:08:13.983 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:13.983 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:13.983 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2d75d879-1523-41f8-816e-3e5597ae4187 lvol 150 00:08:14.241 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=448ad99f-6946-47ef-84ce-75db77962a52 00:08:14.241 07:33:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:14.241 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:14.499 [2024-07-26 07:33:39.986341] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:14.499 [2024-07-26 07:33:39.986440] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:14.499 true 00:08:14.499 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:14.499 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d75d879-1523-41f8-816e-3e5597ae4187 00:08:14.757 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:14.757 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:15.015 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 448ad99f-6946-47ef-84ce-75db77962a52 00:08:15.272 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:15.530 [2024-07-26 07:33:40.991082] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.530 07:33:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:15.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
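As in the clean variant, the new lvol is exported to the initiator over NVMe/TCP before any I/O starts. The four RPCs doing that, with $lvol standing for the bdev UUID returned by bdev_lvol_create (addresses and NQN as used by the test):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420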
00:08:15.788 07:33:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65470 00:08:15.788 07:33:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:15.788 07:33:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:15.788 07:33:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65470 /var/tmp/bdevperf.sock 00:08:15.788 07:33:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 65470 ']' 00:08:15.788 07:33:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:15.788 07:33:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.788 07:33:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:15.788 07:33:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.788 07:33:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:15.788 [2024-07-26 07:33:41.288818] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:08:15.788 [2024-07-26 07:33:41.289070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65470 ] 00:08:16.046 [2024-07-26 07:33:41.428149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.046 [2024-07-26 07:33:41.559193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.046 [2024-07-26 07:33:41.635576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:16.980 07:33:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:16.980 07:33:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:16.980 07:33:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:16.980 Nvme0n1 00:08:16.980 07:33:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:17.238 [ 00:08:17.238 { 00:08:17.238 "name": "Nvme0n1", 00:08:17.238 "aliases": [ 00:08:17.238 "448ad99f-6946-47ef-84ce-75db77962a52" 00:08:17.238 ], 00:08:17.238 "product_name": "NVMe disk", 00:08:17.238 "block_size": 4096, 00:08:17.238 "num_blocks": 38912, 00:08:17.238 "uuid": "448ad99f-6946-47ef-84ce-75db77962a52", 00:08:17.238 "assigned_rate_limits": { 00:08:17.238 "rw_ios_per_sec": 0, 00:08:17.238 
"rw_mbytes_per_sec": 0, 00:08:17.238 "r_mbytes_per_sec": 0, 00:08:17.238 "w_mbytes_per_sec": 0 00:08:17.238 }, 00:08:17.238 "claimed": false, 00:08:17.238 "zoned": false, 00:08:17.238 "supported_io_types": { 00:08:17.238 "read": true, 00:08:17.238 "write": true, 00:08:17.238 "unmap": true, 00:08:17.238 "flush": true, 00:08:17.238 "reset": true, 00:08:17.238 "nvme_admin": true, 00:08:17.238 "nvme_io": true, 00:08:17.238 "nvme_io_md": false, 00:08:17.238 "write_zeroes": true, 00:08:17.238 "zcopy": false, 00:08:17.238 "get_zone_info": false, 00:08:17.238 "zone_management": false, 00:08:17.238 "zone_append": false, 00:08:17.238 "compare": true, 00:08:17.238 "compare_and_write": true, 00:08:17.238 "abort": true, 00:08:17.238 "seek_hole": false, 00:08:17.238 "seek_data": false, 00:08:17.238 "copy": true, 00:08:17.238 "nvme_iov_md": false 00:08:17.238 }, 00:08:17.238 "memory_domains": [ 00:08:17.238 { 00:08:17.238 "dma_device_id": "system", 00:08:17.238 "dma_device_type": 1 00:08:17.238 } 00:08:17.238 ], 00:08:17.238 "driver_specific": { 00:08:17.238 "nvme": [ 00:08:17.238 { 00:08:17.238 "trid": { 00:08:17.238 "trtype": "TCP", 00:08:17.238 "adrfam": "IPv4", 00:08:17.238 "traddr": "10.0.0.2", 00:08:17.238 "trsvcid": "4420", 00:08:17.238 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:17.238 }, 00:08:17.238 "ctrlr_data": { 00:08:17.238 "cntlid": 1, 00:08:17.238 "vendor_id": "0x8086", 00:08:17.238 "model_number": "SPDK bdev Controller", 00:08:17.238 "serial_number": "SPDK0", 00:08:17.238 "firmware_revision": "24.09", 00:08:17.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:17.238 "oacs": { 00:08:17.238 "security": 0, 00:08:17.238 "format": 0, 00:08:17.238 "firmware": 0, 00:08:17.238 "ns_manage": 0 00:08:17.238 }, 00:08:17.238 "multi_ctrlr": true, 00:08:17.238 "ana_reporting": false 00:08:17.238 }, 00:08:17.238 "vs": { 00:08:17.238 "nvme_version": "1.3" 00:08:17.238 }, 00:08:17.238 "ns_data": { 00:08:17.238 "id": 1, 00:08:17.238 "can_share": true 00:08:17.238 } 00:08:17.238 } 00:08:17.238 ], 00:08:17.238 "mp_policy": "active_passive" 00:08:17.238 } 00:08:17.238 } 00:08:17.238 ] 00:08:17.238 07:33:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65499 00:08:17.238 07:33:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:17.238 07:33:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:17.496 Running I/O for 10 seconds... 
00:08:18.429 Latency(us) 00:08:18.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.429 Nvme0n1 : 1.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:08:18.429 =================================================================================================================== 00:08:18.429 Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:08:18.429 00:08:19.363 07:33:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2d75d879-1523-41f8-816e-3e5597ae4187 00:08:19.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.363 Nvme0n1 : 2.00 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:08:19.363 =================================================================================================================== 00:08:19.363 Total : 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:08:19.363 00:08:19.622 true 00:08:19.622 07:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d75d879-1523-41f8-816e-3e5597ae4187 00:08:19.622 07:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:19.880 07:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:19.880 07:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:19.880 07:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 65499 00:08:20.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.473 Nvme0n1 : 3.00 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:08:20.473 =================================================================================================================== 00:08:20.473 Total : 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:08:20.473 00:08:21.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.410 Nvme0n1 : 4.00 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:08:21.410 =================================================================================================================== 00:08:21.410 Total : 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:08:21.410 00:08:22.345 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.345 Nvme0n1 : 5.00 7569.20 29.57 0.00 0.00 0.00 0.00 0.00 00:08:22.345 =================================================================================================================== 00:08:22.345 Total : 7569.20 29.57 0.00 0.00 0.00 0.00 0.00 00:08:22.345 00:08:23.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.722 Nvme0n1 : 6.00 7535.33 29.43 0.00 0.00 0.00 0.00 0.00 00:08:23.722 =================================================================================================================== 00:08:23.722 Total : 7535.33 29.43 0.00 0.00 0.00 0.00 0.00 00:08:23.722 00:08:24.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.658 Nvme0n1 : 7.00 7400.57 28.91 0.00 0.00 0.00 0.00 0.00 00:08:24.658 =================================================================================================================== 00:08:24.658 
Total : 7400.57 28.91 0.00 0.00 0.00 0.00 0.00 00:08:24.658 00:08:25.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.593 Nvme0n1 : 8.00 7380.38 28.83 0.00 0.00 0.00 0.00 0.00 00:08:25.593 =================================================================================================================== 00:08:25.593 Total : 7380.38 28.83 0.00 0.00 0.00 0.00 0.00 00:08:25.593 00:08:26.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.530 Nvme0n1 : 9.00 7364.67 28.77 0.00 0.00 0.00 0.00 0.00 00:08:26.530 =================================================================================================================== 00:08:26.530 Total : 7364.67 28.77 0.00 0.00 0.00 0.00 0.00 00:08:26.530 00:08:27.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.467 Nvme0n1 : 10.00 7339.40 28.67 0.00 0.00 0.00 0.00 0.00 00:08:27.467 =================================================================================================================== 00:08:27.467 Total : 7339.40 28.67 0.00 0.00 0.00 0.00 0.00 00:08:27.467 00:08:27.467 00:08:27.467 Latency(us) 00:08:27.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.468 Nvme0n1 : 10.01 7344.78 28.69 0.00 0.00 17421.71 10307.03 145847.39 00:08:27.468 =================================================================================================================== 00:08:27.468 Total : 7344.78 28.69 0.00 0.00 17421.71 10307.03 145847.39 00:08:27.468 0 00:08:27.468 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65470 00:08:27.468 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 65470 ']' 00:08:27.468 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 65470 00:08:27.468 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:27.468 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.468 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65470 00:08:27.468 killing process with pid 65470 00:08:27.468 Received shutdown signal, test time was about 10.000000 seconds 00:08:27.468 00:08:27.468 Latency(us) 00:08:27.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.468 =================================================================================================================== 00:08:27.468 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:27.468 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:27.468 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:27.468 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65470' 00:08:27.468 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 65470 00:08:27.468 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 65470 
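A quick consistency check on the 10-second summary line above: at the fixed 4 KiB I/O size the IOPS and MiB/s columns should agree, for example:

# 7344.78 IOPS * 4096 bytes per I/O, expressed in MiB/s.
awk 'BEGIN { printf "%.2f MiB/s\n", 7344.78 * 4096 / (1024 * 1024) }'   # ~28.69 MiB/s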
00:08:27.727 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:27.984 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:28.243 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:28.243 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d75d879-1523-41f8-816e-3e5597ae4187 00:08:28.502 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:28.502 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:28.502 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65118 00:08:28.502 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65118 00:08:28.502 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65118 Killed "${NVMF_APP[@]}" "$@" 00:08:28.502 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:28.502 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:28.502 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:28.502 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:28.502 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.502 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=65633 00:08:28.502 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 65633 00:08:28.502 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:28.502 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 65633 ']' 00:08:28.502 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.502 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.502 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
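This is where the dirty variant earns its name: instead of unloading the lvstore cleanly, the first target (pid 65118) is killed with SIGKILL while the lvstore is still open, and a fresh nvmf_tgt is started against the same 400M backing file, so the blobstore has to run recovery when the AIO bdev is re-created (the "Performing recovery on blobstore" notices that follow). A sketch of that restart, with $nvmfpid, $SPDK and $AIO standing for the values shown in the trace; the polling loop is illustrative:

kill -9 "$nvmfpid"                       # lvstore is left dirty on disk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
# Re-creating the AIO bdev re-opens the blobstore and triggers recovery,
# after which the lvstore and its lvol are reported again by the RPCs below.
"$SPDK/scripts/rpc.py" bdev_aio_create "$AIO" aio_bdev 4096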
00:08:28.502 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.502 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.761 [2024-07-26 07:33:54.111335] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:08:28.761 [2024-07-26 07:33:54.111623] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.761 [2024-07-26 07:33:54.247822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.020 [2024-07-26 07:33:54.370033] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.020 [2024-07-26 07:33:54.370348] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.020 [2024-07-26 07:33:54.370535] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.020 [2024-07-26 07:33:54.370676] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.020 [2024-07-26 07:33:54.370710] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.020 [2024-07-26 07:33:54.370827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.020 [2024-07-26 07:33:54.442691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.621 07:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.621 07:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:29.621 07:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:29.621 07:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.621 07:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:29.621 07:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.621 07:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:29.893 [2024-07-26 07:33:55.312052] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:29.893 [2024-07-26 07:33:55.312496] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:29.893 [2024-07-26 07:33:55.312827] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:29.893 07:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:29.893 07:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 448ad99f-6946-47ef-84ce-75db77962a52 00:08:29.893 07:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=448ad99f-6946-47ef-84ce-75db77962a52 00:08:29.893 07:33:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:29.893 07:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:29.893 07:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:29.893 07:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:29.893 07:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:30.152 07:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 448ad99f-6946-47ef-84ce-75db77962a52 -t 2000 00:08:30.410 [ 00:08:30.411 { 00:08:30.411 "name": "448ad99f-6946-47ef-84ce-75db77962a52", 00:08:30.411 "aliases": [ 00:08:30.411 "lvs/lvol" 00:08:30.411 ], 00:08:30.411 "product_name": "Logical Volume", 00:08:30.411 "block_size": 4096, 00:08:30.411 "num_blocks": 38912, 00:08:30.411 "uuid": "448ad99f-6946-47ef-84ce-75db77962a52", 00:08:30.411 "assigned_rate_limits": { 00:08:30.411 "rw_ios_per_sec": 0, 00:08:30.411 "rw_mbytes_per_sec": 0, 00:08:30.411 "r_mbytes_per_sec": 0, 00:08:30.411 "w_mbytes_per_sec": 0 00:08:30.411 }, 00:08:30.411 "claimed": false, 00:08:30.411 "zoned": false, 00:08:30.411 "supported_io_types": { 00:08:30.411 "read": true, 00:08:30.411 "write": true, 00:08:30.411 "unmap": true, 00:08:30.411 "flush": false, 00:08:30.411 "reset": true, 00:08:30.411 "nvme_admin": false, 00:08:30.411 "nvme_io": false, 00:08:30.411 "nvme_io_md": false, 00:08:30.411 "write_zeroes": true, 00:08:30.411 "zcopy": false, 00:08:30.411 "get_zone_info": false, 00:08:30.411 "zone_management": false, 00:08:30.411 "zone_append": false, 00:08:30.411 "compare": false, 00:08:30.411 "compare_and_write": false, 00:08:30.411 "abort": false, 00:08:30.411 "seek_hole": true, 00:08:30.411 "seek_data": true, 00:08:30.411 "copy": false, 00:08:30.411 "nvme_iov_md": false 00:08:30.411 }, 00:08:30.411 "driver_specific": { 00:08:30.411 "lvol": { 00:08:30.411 "lvol_store_uuid": "2d75d879-1523-41f8-816e-3e5597ae4187", 00:08:30.411 "base_bdev": "aio_bdev", 00:08:30.411 "thin_provision": false, 00:08:30.411 "num_allocated_clusters": 38, 00:08:30.411 "snapshot": false, 00:08:30.411 "clone": false, 00:08:30.411 "esnap_clone": false 00:08:30.411 } 00:08:30.411 } 00:08:30.411 } 00:08:30.411 ] 00:08:30.411 07:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:30.411 07:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:30.411 07:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d75d879-1523-41f8-816e-3e5597ae4187 00:08:30.669 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:30.669 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:30.669 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
2d75d879-1523-41f8-816e-3e5597ae4187 00:08:30.669 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:30.669 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:30.928 [2024-07-26 07:33:56.461265] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:30.928 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d75d879-1523-41f8-816e-3e5597ae4187 00:08:30.928 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:30.928 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d75d879-1523-41f8-816e-3e5597ae4187 00:08:30.928 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:30.928 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.928 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:30.928 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.928 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:30.928 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.928 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:30.928 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:30.928 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d75d879-1523-41f8-816e-3e5597ae4187 00:08:31.186 request: 00:08:31.186 { 00:08:31.186 "uuid": "2d75d879-1523-41f8-816e-3e5597ae4187", 00:08:31.186 "method": "bdev_lvol_get_lvstores", 00:08:31.186 "req_id": 1 00:08:31.186 } 00:08:31.186 Got JSON-RPC error response 00:08:31.186 response: 00:08:31.186 { 00:08:31.186 "code": -19, 00:08:31.186 "message": "No such device" 00:08:31.186 } 00:08:31.186 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:31.186 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:31.186 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:31.186 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:31.186 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:31.444 aio_bdev 00:08:31.444 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 448ad99f-6946-47ef-84ce-75db77962a52 00:08:31.444 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=448ad99f-6946-47ef-84ce-75db77962a52 00:08:31.444 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:31.444 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:31.444 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:31.444 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:31.444 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:31.702 07:33:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 448ad99f-6946-47ef-84ce-75db77962a52 -t 2000 00:08:31.960 [ 00:08:31.960 { 00:08:31.960 "name": "448ad99f-6946-47ef-84ce-75db77962a52", 00:08:31.960 "aliases": [ 00:08:31.960 "lvs/lvol" 00:08:31.960 ], 00:08:31.960 "product_name": "Logical Volume", 00:08:31.960 "block_size": 4096, 00:08:31.960 "num_blocks": 38912, 00:08:31.960 "uuid": "448ad99f-6946-47ef-84ce-75db77962a52", 00:08:31.960 "assigned_rate_limits": { 00:08:31.960 "rw_ios_per_sec": 0, 00:08:31.960 "rw_mbytes_per_sec": 0, 00:08:31.961 "r_mbytes_per_sec": 0, 00:08:31.961 "w_mbytes_per_sec": 0 00:08:31.961 }, 00:08:31.961 "claimed": false, 00:08:31.961 "zoned": false, 00:08:31.961 "supported_io_types": { 00:08:31.961 "read": true, 00:08:31.961 "write": true, 00:08:31.961 "unmap": true, 00:08:31.961 "flush": false, 00:08:31.961 "reset": true, 00:08:31.961 "nvme_admin": false, 00:08:31.961 "nvme_io": false, 00:08:31.961 "nvme_io_md": false, 00:08:31.961 "write_zeroes": true, 00:08:31.961 "zcopy": false, 00:08:31.961 "get_zone_info": false, 00:08:31.961 "zone_management": false, 00:08:31.961 "zone_append": false, 00:08:31.961 "compare": false, 00:08:31.961 "compare_and_write": false, 00:08:31.961 "abort": false, 00:08:31.961 "seek_hole": true, 00:08:31.961 "seek_data": true, 00:08:31.961 "copy": false, 00:08:31.961 "nvme_iov_md": false 00:08:31.961 }, 00:08:31.961 "driver_specific": { 00:08:31.961 "lvol": { 00:08:31.961 "lvol_store_uuid": "2d75d879-1523-41f8-816e-3e5597ae4187", 00:08:31.961 "base_bdev": "aio_bdev", 00:08:31.961 "thin_provision": false, 00:08:31.961 "num_allocated_clusters": 38, 00:08:31.961 "snapshot": false, 00:08:31.961 "clone": false, 00:08:31.961 "esnap_clone": false 00:08:31.961 } 00:08:31.961 } 00:08:31.961 } 00:08:31.961 ] 00:08:31.961 07:33:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:31.961 07:33:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d75d879-1523-41f8-816e-3e5597ae4187 00:08:31.961 07:33:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r 
'.[0].free_clusters' 00:08:32.219 07:33:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:32.219 07:33:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d75d879-1523-41f8-816e-3e5597ae4187 00:08:32.219 07:33:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:32.478 07:33:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:32.478 07:33:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 448ad99f-6946-47ef-84ce-75db77962a52 00:08:32.737 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2d75d879-1523-41f8-816e-3e5597ae4187 00:08:32.995 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:32.995 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:33.561 ************************************ 00:08:33.561 END TEST lvs_grow_dirty 00:08:33.562 ************************************ 00:08:33.562 00:08:33.562 real 0m20.206s 00:08:33.562 user 0m42.793s 00:08:33.562 sys 0m8.305s 00:08:33.562 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.562 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:33.562 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:33.562 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:33.562 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:33.562 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:33.562 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:33.562 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:33.562 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:33.562 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:33.562 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:33.562 nvmf_trace.0 00:08:33.562 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:33.562 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:33.562 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:33.562 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:33.821 07:33:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.821 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:33.821 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.821 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.821 rmmod nvme_tcp 00:08:33.821 rmmod nvme_fabrics 00:08:33.821 rmmod nvme_keyring 00:08:33.821 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.821 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:33.821 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:33.821 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 65633 ']' 00:08:33.821 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 65633 00:08:33.821 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 65633 ']' 00:08:33.821 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 65633 00:08:33.821 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:33.821 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:33.821 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65633 00:08:33.821 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:33.821 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:33.821 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65633' 00:08:33.821 killing process with pid 65633 00:08:33.821 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 65633 00:08:33.821 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 65633 00:08:34.080 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:34.080 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:34.080 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:34.080 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.080 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:34.080 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.080 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.080 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.080 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:34.080 ************************************ 00:08:34.080 END TEST nvmf_lvs_grow 00:08:34.080 ************************************ 00:08:34.080 00:08:34.080 real 0m41.075s 00:08:34.080 user 1m5.969s 00:08:34.080 sys 0m11.744s 
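The lvs_grow_dirty case that just finished is, under the xtrace noise, a short rpc.py dialogue: re-create the AIO bdev that backs a dirty lvstore, wait for examine so blobstore recovery can replay the metadata (the "Performing recovery on blobstore" notices above), confirm the lvol and the free/total cluster counts survived, then tear the store down. A condensed sketch of that flow, using the paths and UUIDs from this particular run (they differ on every run):

# Condensed from the lvs_grow_dirty trace above; a sketch, not the test script itself.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
LVOL=448ad99f-6946-47ef-84ce-75db77962a52            # lvol UUID from this run
LVS=2d75d879-1523-41f8-816e-3e5597ae4187             # lvstore UUID from this run

$RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096       # re-attach the dirty backing file
$RPC bdev_wait_for_examine                           # blobstore recovery replays metadata here
$RPC bdev_get_bdevs -b "$LVOL" -t 2000               # the lvol must reappear within the timeout
$RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].free_clusters'        # expected 61 above
$RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'  # expected 99 above
$RPC bdev_aio_delete aio_bdev                        # hot-removes the lvstore; a following
                                                     # bdev_lvol_get_lvstores fails with "No such device"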
00:08:34.080 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.080 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:34.339 07:33:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:34.339 07:33:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:34.339 07:33:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.339 07:33:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.339 ************************************ 00:08:34.339 START TEST nvmf_bdev_io_wait 00:08:34.339 ************************************ 00:08:34.339 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:34.339 * Looking for test storage... 00:08:34.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:34.339 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:34.339 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:34.339 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:34.340 
07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:34.340 Cannot find device "nvmf_tgt_br" 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:34.340 Cannot find device "nvmf_tgt_br2" 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:34.340 Cannot find device "nvmf_tgt_br" 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:34.340 Cannot find device "nvmf_tgt_br2" 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:34.340 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:34.599 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:34.599 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:34.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.599 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:34.599 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:34.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.599 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:34.599 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:34.599 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:34.599 07:34:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:34.599 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:34.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:08:34.599 00:08:34.599 --- 10.0.0.2 ping statistics --- 00:08:34.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.600 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:34.600 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:34.600 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:08:34.600 00:08:34.600 --- 10.0.0.3 ping statistics --- 00:08:34.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.600 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:34.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:34.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:34.600 00:08:34.600 --- 10.0.0.1 ping statistics --- 00:08:34.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.600 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:34.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=65941 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 65941 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 65941 ']' 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
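The pings that just passed ran over the veth topology assembled by nvmf_veth_init a few lines up: one initiator veth left on the host, two target veths moved into the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge, plus an iptables accept rule for the NVMe/TCP port. A rough standalone equivalent of those ip/iptables calls (interface names and 10.0.0.x addresses exactly as in this log):

# Sketch of the test network built by nvmf_veth_init; run as root, teardown not shown.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # host to target, as verified in the statistics above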
00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.600 07:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:34.859 [2024-07-26 07:34:00.246103] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:08:34.859 [2024-07-26 07:34:00.246209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.859 [2024-07-26 07:34:00.387980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:35.117 [2024-07-26 07:34:00.508336] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.117 [2024-07-26 07:34:00.508726] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.117 [2024-07-26 07:34:00.508863] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.118 [2024-07-26 07:34:00.508995] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.118 [2024-07-26 07:34:00.509031] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.118 [2024-07-26 07:34:00.509287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.118 [2024-07-26 07:34:00.509513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:35.118 [2024-07-26 07:34:00.509393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.118 [2024-07-26 07:34:00.509514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.684 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.684 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:35.684 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:35.684 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:35.684 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.684 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.684 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:35.684 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.684 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.684 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.684 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:35.684 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.684 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.943 [2024-07-26 07:34:01.352150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion 
override: uring 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.943 [2024-07-26 07:34:01.369887] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.943 Malloc0 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.943 [2024-07-26 07:34:01.439922] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=65976 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=65978 00:08:35.943 07:34:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:35.943 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:35.943 { 00:08:35.943 "params": { 00:08:35.943 "name": "Nvme$subsystem", 00:08:35.943 "trtype": "$TEST_TRANSPORT", 00:08:35.943 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.943 "adrfam": "ipv4", 00:08:35.943 "trsvcid": "$NVMF_PORT", 00:08:35.943 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.943 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.943 "hdgst": ${hdgst:-false}, 00:08:35.944 "ddgst": ${ddgst:-false} 00:08:35.944 }, 00:08:35.944 "method": "bdev_nvme_attach_controller" 00:08:35.944 } 00:08:35.944 EOF 00:08:35.944 )") 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=65980 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:35.944 { 00:08:35.944 "params": { 00:08:35.944 "name": "Nvme$subsystem", 00:08:35.944 "trtype": "$TEST_TRANSPORT", 00:08:35.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.944 "adrfam": "ipv4", 00:08:35.944 "trsvcid": "$NVMF_PORT", 00:08:35.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.944 "hdgst": ${hdgst:-false}, 00:08:35.944 "ddgst": ${ddgst:-false} 00:08:35.944 }, 00:08:35.944 "method": "bdev_nvme_attach_controller" 00:08:35.944 } 00:08:35.944 EOF 00:08:35.944 )") 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=65983 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:35.944 
07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:35.944 { 00:08:35.944 "params": { 00:08:35.944 "name": "Nvme$subsystem", 00:08:35.944 "trtype": "$TEST_TRANSPORT", 00:08:35.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.944 "adrfam": "ipv4", 00:08:35.944 "trsvcid": "$NVMF_PORT", 00:08:35.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.944 "hdgst": ${hdgst:-false}, 00:08:35.944 "ddgst": ${ddgst:-false} 00:08:35.944 }, 00:08:35.944 "method": "bdev_nvme_attach_controller" 00:08:35.944 } 00:08:35.944 EOF 00:08:35.944 )") 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:35.944 { 00:08:35.944 "params": { 00:08:35.944 "name": "Nvme$subsystem", 00:08:35.944 "trtype": "$TEST_TRANSPORT", 00:08:35.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.944 "adrfam": "ipv4", 00:08:35.944 "trsvcid": "$NVMF_PORT", 00:08:35.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.944 "hdgst": ${hdgst:-false}, 00:08:35.944 "ddgst": ${ddgst:-false} 00:08:35.944 }, 00:08:35.944 "method": "bdev_nvme_attach_controller" 00:08:35.944 } 00:08:35.944 EOF 00:08:35.944 )") 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
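Setting the JSON plumbing aside for a moment, the target bring-up traced a little further up (the rpc_cmd lines after nvmfappstart) reduces to a handful of RPCs against the nvmf_tgt that was launched with --wait-for-rpc. A condensed sketch with rpc.py called directly, flags exactly as traced:

# Sketch of the bdev_io_wait target bring-up; rpc_cmd in the suite wraps scripts/rpc.py.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_set_options -p 5 -c 1              # tiny bdev_io pool, which is what io_wait exercises
$RPC framework_start_init                    # required because the target started with --wait-for-rpc
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0    # 64 MiB backing bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420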
00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:35.944 "params": { 00:08:35.944 "name": "Nvme1", 00:08:35.944 "trtype": "tcp", 00:08:35.944 "traddr": "10.0.0.2", 00:08:35.944 "adrfam": "ipv4", 00:08:35.944 "trsvcid": "4420", 00:08:35.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.944 "hdgst": false, 00:08:35.944 "ddgst": false 00:08:35.944 }, 00:08:35.944 "method": "bdev_nvme_attach_controller" 00:08:35.944 }' 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:35.944 "params": { 00:08:35.944 "name": "Nvme1", 00:08:35.944 "trtype": "tcp", 00:08:35.944 "traddr": "10.0.0.2", 00:08:35.944 "adrfam": "ipv4", 00:08:35.944 "trsvcid": "4420", 00:08:35.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.944 "hdgst": false, 00:08:35.944 "ddgst": false 00:08:35.944 }, 00:08:35.944 "method": "bdev_nvme_attach_controller" 00:08:35.944 }' 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:35.944 "params": { 00:08:35.944 "name": "Nvme1", 00:08:35.944 "trtype": "tcp", 00:08:35.944 "traddr": "10.0.0.2", 00:08:35.944 "adrfam": "ipv4", 00:08:35.944 "trsvcid": "4420", 00:08:35.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.944 "hdgst": false, 00:08:35.944 "ddgst": false 00:08:35.944 }, 00:08:35.944 "method": "bdev_nvme_attach_controller" 00:08:35.944 }' 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:35.944 "params": { 00:08:35.944 "name": "Nvme1", 00:08:35.944 "trtype": "tcp", 00:08:35.944 "traddr": "10.0.0.2", 00:08:35.944 "adrfam": "ipv4", 00:08:35.944 "trsvcid": "4420", 00:08:35.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.944 "hdgst": false, 00:08:35.944 "ddgst": false 00:08:35.944 }, 00:08:35.944 "method": "bdev_nvme_attach_controller" 00:08:35.944 }' 00:08:35.944 [2024-07-26 07:34:01.496396] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:08:35.944 [2024-07-26 07:34:01.496497] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:35.944 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 65976 00:08:35.944 [2024-07-26 07:34:01.505797] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
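Each of the four printf'd blocks above is the per-instance NVMe-oF config handed to a bdevperf worker on /dev/fd/63: a single bdev_nvme_attach_controller entry pointing at 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1. Roughly, the write worker is launched as below (gen_nvmf_target_json is the suite helper that wraps those params into the full --json config; the read, flush and unmap peers only change the core mask, the -i index and the -w argument):

# Sketch of one of the four concurrent bdevperf invocations traced above (the -w write one).
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
$BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256
# <(...) is what appears as --json /dev/fd/63 in the trace; the generated JSON attaches
# Nvme1 over TCP to 10.0.0.2:4420 with hdgst/ddgst disabled, exactly as printed above.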
00:08:35.944 [2024-07-26 07:34:01.505867] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:35.944 [2024-07-26 07:34:01.543305] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:08:35.944 [2024-07-26 07:34:01.543972] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:36.203 [2024-07-26 07:34:01.547579] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:08:36.203 [2024-07-26 07:34:01.547647] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:36.203 [2024-07-26 07:34:01.728734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.461 [2024-07-26 07:34:01.832311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.461 [2024-07-26 07:34:01.857501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:36.461 [2024-07-26 07:34:01.929769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.461 [2024-07-26 07:34:01.951837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:36.462 [2024-07-26 07:34:01.981137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:36.462 [2024-07-26 07:34:02.029539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:36.462 [2024-07-26 07:34:02.041158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.462 [2024-07-26 07:34:02.057009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:36.720 Running I/O for 1 seconds... 00:08:36.720 [2024-07-26 07:34:02.118042] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:36.720 Running I/O for 1 seconds... 00:08:36.720 [2024-07-26 07:34:02.158249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:36.720 [2024-07-26 07:34:02.219098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:36.720 Running I/O for 1 seconds... 00:08:36.720 Running I/O for 1 seconds... 
00:08:37.655 00:08:37.655 Latency(us) 00:08:37.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.655 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:37.655 Nvme1n1 : 1.00 164086.82 640.96 0.00 0.00 777.27 370.50 1191.56 00:08:37.655 =================================================================================================================== 00:08:37.655 Total : 164086.82 640.96 0.00 0.00 777.27 370.50 1191.56 00:08:37.655 00:08:37.655 Latency(us) 00:08:37.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.655 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:37.655 Nvme1n1 : 1.02 6545.90 25.57 0.00 0.00 19260.85 5689.72 40989.79 00:08:37.655 =================================================================================================================== 00:08:37.655 Total : 6545.90 25.57 0.00 0.00 19260.85 5689.72 40989.79 00:08:37.655 00:08:37.655 Latency(us) 00:08:37.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.655 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:37.655 Nvme1n1 : 1.01 7525.08 29.39 0.00 0.00 16906.24 9234.62 26571.87 00:08:37.655 =================================================================================================================== 00:08:37.655 Total : 7525.08 29.39 0.00 0.00 16906.24 9234.62 26571.87 00:08:37.914 00:08:37.914 Latency(us) 00:08:37.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.914 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:37.914 Nvme1n1 : 1.01 6747.73 26.36 0.00 0.00 18905.73 5779.08 48854.11 00:08:37.914 =================================================================================================================== 00:08:37.914 Total : 6747.73 26.36 0.00 0.00 18905.73 5779.08 48854.11 00:08:37.914 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 65978 00:08:38.174 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 65980 00:08:38.174 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 65983 00:08:38.174 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:38.174 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.174 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.174 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.174 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:38.174 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:38.174 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:38.174 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:38.174 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:38.174 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:38.174 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
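The rmmod lines just below are nvmftestfini unwinding the whole setup: sync, unload the host-side NVMe/TCP modules, kill the target by the pid recorded at start-up, then drop the veth namespace and the initiator address. A loose sketch of that teardown (the body of the _remove_spdk_ns helper is not shown in this trace; deleting the namespace is presumably what it amounts to):

# Sketch of the nvmftestfini / nvmf_tcp_fini teardown; pid 65941 is this run's nvmf_tgt.
sync
modprobe -v -r nvme-tcp            # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines below
modprobe -v -r nvme-fabrics
kill 65941                         # killprocess also waits for the reactor to exit
ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of the _remove_spdk_ns helper
ip -4 addr flush nvmf_init_if      # drops 10.0.0.1/24 from the initiator veth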
00:08:38.174 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:38.432 rmmod nvme_tcp 00:08:38.432 rmmod nvme_fabrics 00:08:38.432 rmmod nvme_keyring 00:08:38.432 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:38.432 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:38.432 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:38.432 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 65941 ']' 00:08:38.432 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 65941 00:08:38.432 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 65941 ']' 00:08:38.432 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 65941 00:08:38.432 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:38.432 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.432 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65941 00:08:38.432 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:38.432 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:38.432 killing process with pid 65941 00:08:38.432 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65941' 00:08:38.432 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 65941 00:08:38.432 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 65941 00:08:38.690 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:38.690 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:38.690 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:38.690 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.690 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:38.690 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.690 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.690 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.690 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:38.690 00:08:38.690 real 0m4.435s 00:08:38.690 user 0m19.846s 00:08:38.690 sys 0m2.334s 00:08:38.690 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.690 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.690 ************************************ 00:08:38.690 END TEST nvmf_bdev_io_wait 
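The block above is the standard nvmftestfini teardown for this test: the subsystem is deleted over RPC, the nvme-tcp module is removed (the trace shows nvme_tcp, nvme_fabrics and nvme_keyring being unloaded), the nvmf_tgt started for the test (pid 65941) is killed and waited on, the target network namespace is removed and the initiator-side address is flushed. A condensed sketch of that sequence, using the pid and interface names that appear in this log and assuming _remove_spdk_ns ultimately deletes the nvmf_tgt_ns_spdk namespace (the helper itself lives in test/common/autotest_common.sh and is not expanded in this excerpt):

scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # issued via rpc_cmd in the trace above
modprobe -v -r nvme-tcp            # also unloads nvme_fabrics / nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
kill 65941                         # stop the nvmf_tgt launched for nvmf_bdev_io_wait
wait 65941
ip netns delete nvmf_tgt_ns_spdk   # assumed effect of _remove_spdk_ns
ip -4 addr flush nvmf_init_if      # clear the initiator-side veth address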
00:08:38.690 ************************************ 00:08:38.690 07:34:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:38.690 07:34:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:38.690 07:34:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.691 07:34:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.691 ************************************ 00:08:38.691 START TEST nvmf_queue_depth 00:08:38.691 ************************************ 00:08:38.691 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:38.691 * Looking for test storage... 00:08:38.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:38.691 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:38.691 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:38.691 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.691 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.691 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.691 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.691 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.691 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.691 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.691 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.691 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.691 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 
-- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:38.950 Cannot find device "nvmf_tgt_br" 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:38.950 Cannot find device "nvmf_tgt_br2" 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:38.950 Cannot find device "nvmf_tgt_br" 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:38.950 Cannot find device "nvmf_tgt_br2" 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:38.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.950 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:38.951 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:38.951 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.951 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:38.951 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:38.951 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:38.951 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:38.951 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:38.951 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:38.951 07:34:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:38.951 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:38.951 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:38.951 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:38.951 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:38.951 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:39.209 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:39.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:08:39.210 00:08:39.210 --- 10.0.0.2 ping statistics --- 00:08:39.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.210 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:39.210 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:39.210 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:08:39.210 00:08:39.210 --- 10.0.0.3 ping statistics --- 00:08:39.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.210 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:39.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:39.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:39.210 00:08:39.210 --- 10.0.0.1 ping statistics --- 00:08:39.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.210 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66219 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66219 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 66219 ']' 00:08:39.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.210 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.210 [2024-07-26 07:34:04.732102] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
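At this point the veth test network built by nvmf_veth_init (traced above) has been verified end to end by the three ping checks: the initiator keeps 10.0.0.1/24 on nvmf_init_if, the target addresses 10.0.0.2/24 and 10.0.0.3/24 sit on nvmf_tgt_if and nvmf_tgt_if2 inside the nvmf_tgt_ns_spdk namespace, the peer ends are joined by the nvmf_br bridge, and TCP port 4420 is opened on the initiator side. A condensed restatement of that topology, with the link-up steps omitted for brevity (the full sequence is in test/nvmf/common.sh as traced above):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT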
00:08:39.210 [2024-07-26 07:34:04.732215] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.468 [2024-07-26 07:34:04.873175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.468 [2024-07-26 07:34:04.991699] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.468 [2024-07-26 07:34:04.991773] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.468 [2024-07-26 07:34:04.991785] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.468 [2024-07-26 07:34:04.991794] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.469 [2024-07-26 07:34:04.991801] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.469 [2024-07-26 07:34:04.991838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.469 [2024-07-26 07:34:05.068761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:40.048 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.048 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:40.048 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:40.048 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:40.048 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.306 [2024-07-26 07:34:05.680278] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.306 Malloc0 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.306 [2024-07-26 07:34:05.750869] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66251 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66251 /var/tmp/bdevperf.sock 00:08:40.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 66251 ']' 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.306 07:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.306 [2024-07-26 07:34:05.811357] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:08:40.306 [2024-07-26 07:34:05.811622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66251 ] 00:08:40.564 [2024-07-26 07:34:05.950277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.564 [2024-07-26 07:34:06.104362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.821 [2024-07-26 07:34:06.184500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:41.385 07:34:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.385 07:34:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:41.385 07:34:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:41.385 07:34:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.385 07:34:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:41.385 NVMe0n1 00:08:41.385 07:34:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.385 07:34:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:41.385 Running I/O for 10 seconds... 00:08:53.584 00:08:53.584 Latency(us) 00:08:53.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.584 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:53.584 Verification LBA range: start 0x0 length 0x4000 00:08:53.584 NVMe0n1 : 10.07 8390.42 32.78 0.00 0.00 121440.78 20494.89 90558.84 00:08:53.584 =================================================================================================================== 00:08:53.584 Total : 8390.42 32.78 0.00 0.00 121440.78 20494.89 90558.84 00:08:53.584 0 00:08:53.584 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66251 00:08:53.584 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 66251 ']' 00:08:53.584 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 66251 00:08:53.584 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:53.584 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.584 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66251 00:08:53.584 killing process with pid 66251 00:08:53.584 Received shutdown signal, test time was about 10.000000 seconds 00:08:53.584 00:08:53.584 Latency(us) 00:08:53.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.584 =================================================================================================================== 00:08:53.584 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:53.584 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:08:53.584 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:53.584 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66251' 00:08:53.584 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 66251 00:08:53.584 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 66251 00:08:53.584 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:53.584 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:53.584 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:53.584 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:53.584 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:53.584 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:53.585 rmmod nvme_tcp 00:08:53.585 rmmod nvme_fabrics 00:08:53.585 rmmod nvme_keyring 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66219 ']' 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66219 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 66219 ']' 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 66219 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66219 00:08:53.585 killing process with pid 66219 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66219' 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 66219 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 66219 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:53.585 07:34:17 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:53.585 00:08:53.585 real 0m13.684s 00:08:53.585 user 0m23.439s 00:08:53.585 sys 0m2.338s 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:53.585 ************************************ 00:08:53.585 END TEST nvmf_queue_depth 00:08:53.585 ************************************ 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.585 ************************************ 00:08:53.585 START TEST nvmf_target_multipath 00:08:53.585 ************************************ 00:08:53.585 07:34:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:53.585 * Looking for test storage... 
00:08:53.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:53.585 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:53.586 07:34:18 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:53.586 Cannot find device "nvmf_tgt_br" 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:53.586 Cannot find device "nvmf_tgt_br2" 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:53.586 Cannot find device "nvmf_tgt_br" 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:53.586 Cannot find device "nvmf_tgt_br2" 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:53.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:53.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:53.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:08:53.586 00:08:53.586 --- 10.0.0.2 ping statistics --- 00:08:53.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.586 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:53.586 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:53.586 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:08:53.586 00:08:53.586 --- 10.0.0.3 ping statistics --- 00:08:53.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.586 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:53.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:53.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:53.586 00:08:53.586 --- 10.0.0.1 ping statistics --- 00:08:53.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.586 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:08:53.586 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=66570 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 66570 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 66570 ']' 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.587 07:34:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:53.587 [2024-07-26 07:34:18.433292] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:08:53.587 [2024-07-26 07:34:18.433385] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.587 [2024-07-26 07:34:18.571292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.587 [2024-07-26 07:34:18.721519] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.587 [2024-07-26 07:34:18.721595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.587 [2024-07-26 07:34:18.721622] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.587 [2024-07-26 07:34:18.721633] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.587 [2024-07-26 07:34:18.721643] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.587 [2024-07-26 07:34:18.721802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.587 [2024-07-26 07:34:18.722669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.587 [2024-07-26 07:34:18.722815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.587 [2024-07-26 07:34:18.722819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.587 [2024-07-26 07:34:18.801280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:53.845 07:34:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.845 07:34:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:08:53.845 07:34:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:53.845 07:34:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:53.845 07:34:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:54.103 07:34:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.103 07:34:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:54.361 [2024-07-26 07:34:19.714729] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.361 07:34:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:54.620 Malloc0 00:08:54.620 07:34:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:54.879 07:34:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:55.137 07:34:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.137 [2024-07-26 07:34:20.712871] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.137 07:34:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:55.395 [2024-07-26 07:34:20.928981] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:55.395 07:34:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid=437e2608-a818-4ddb-8068-388d756b599a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:08:55.654 07:34:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid=437e2608-a818-4ddb-8068-388d756b599a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:55.654 07:34:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:55.654 07:34:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:08:55.654 07:34:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:55.654 07:34:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:55.654 07:34:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=66660 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:58.186 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:58.186 [global] 00:08:58.186 thread=1 00:08:58.186 invalidate=1 00:08:58.186 rw=randrw 00:08:58.186 time_based=1 00:08:58.186 runtime=6 00:08:58.186 ioengine=libaio 00:08:58.186 direct=1 00:08:58.186 bs=4096 00:08:58.186 iodepth=128 00:08:58.186 norandommap=0 00:08:58.186 numjobs=1 00:08:58.186 00:08:58.186 verify_dump=1 00:08:58.186 verify_backlog=512 00:08:58.186 verify_state_save=0 00:08:58.186 do_verify=1 00:08:58.186 verify=crc32c-intel 00:08:58.186 [job0] 00:08:58.186 filename=/dev/nvme0n1 00:08:58.186 Could not set queue depth (nvme0n1) 00:08:58.186 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:58.186 fio-3.35 00:08:58.186 Starting 1 thread 00:08:58.753 07:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:59.011 07:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:59.270 07:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:59.271 07:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:59.271 07:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:59.271 07:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:59.271 07:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:59.271 07:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:59.271 07:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:59.271 07:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:59.271 07:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:59.271 07:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:59.271 07:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:59.271 07:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:59.271 07:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:59.529 07:34:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:59.787 07:34:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:59.787 07:34:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:59.787 07:34:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:59.787 07:34:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:59.787 07:34:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:59.787 07:34:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:59.787 07:34:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:59.788 07:34:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:59.788 07:34:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:59.788 07:34:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:59.788 07:34:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:59.788 07:34:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:59.788 07:34:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 66660 00:09:03.977 00:09:03.977 job0: (groupid=0, jobs=1): err= 0: pid=66685: Fri Jul 26 07:34:29 2024 00:09:03.977 read: IOPS=9757, BW=38.1MiB/s (40.0MB/s)(229MiB/6006msec) 00:09:03.977 slat (usec): min=4, max=6124, avg=60.54, stdev=234.98 00:09:03.977 clat (usec): min=1902, max=15870, avg=8904.86, stdev=1529.50 00:09:03.977 lat (usec): min=1913, max=15881, avg=8965.41, stdev=1534.08 00:09:03.977 clat percentiles (usec): 00:09:03.977 | 1.00th=[ 4686], 5.00th=[ 6915], 10.00th=[ 7635], 20.00th=[ 8160], 00:09:03.977 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8979], 00:09:03.977 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[10290], 95.00th=[12387], 00:09:03.977 | 99.00th=[13960], 99.50th=[14353], 99.90th=[14877], 99.95th=[15401], 00:09:03.977 | 99.99th=[15795] 00:09:03.977 bw ( KiB/s): min= 8512, max=24560, per=51.41%, avg=20064.73, stdev=5185.08, samples=11 00:09:03.977 iops : min= 2128, max= 6140, avg=5016.18, stdev=1296.27, samples=11 00:09:03.977 write: IOPS=5809, BW=22.7MiB/s (23.8MB/s)(120MiB/5288msec); 0 zone resets 00:09:03.977 slat (usec): min=14, max=3949, avg=69.82, stdev=175.44 00:09:03.977 clat (usec): min=2293, max=16038, avg=7813.71, stdev=1365.55 00:09:03.977 lat (usec): min=2316, max=16066, avg=7883.53, stdev=1370.67 00:09:03.977 clat percentiles (usec): 00:09:03.977 | 1.00th=[ 3523], 5.00th=[ 4621], 10.00th=[ 6390], 20.00th=[ 7308], 00:09:03.977 | 30.00th=[ 7570], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8160], 00:09:03.977 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 8979], 95.00th=[ 9241], 00:09:03.977 | 99.00th=[11863], 99.50th=[12518], 99.90th=[14091], 99.95th=[14877], 00:09:03.977 | 99.99th=[15926] 00:09:03.977 bw ( KiB/s): min= 8640, max=24152, per=86.67%, avg=20141.82, stdev=4968.52, samples=11 00:09:03.977 iops : min= 2160, max= 6038, avg=5035.45, stdev=1242.21, samples=11 00:09:03.977 lat (msec) : 2=0.01%, 4=1.09%, 10=90.10%, 20=8.81% 00:09:03.977 cpu : usr=5.35%, sys=20.27%, ctx=5214, majf=0, minf=108 00:09:03.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:03.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.977 issued rwts: total=58604,30720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.977 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.977 00:09:03.977 Run status group 0 (all jobs): 00:09:03.977 READ: bw=38.1MiB/s (40.0MB/s), 38.1MiB/s-38.1MiB/s (40.0MB/s-40.0MB/s), io=229MiB (240MB), run=6006-6006msec 00:09:03.977 WRITE: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=120MiB (126MB), run=5288-5288msec 00:09:03.977 00:09:03.977 Disk stats (read/write): 00:09:03.977 nvme0n1: ios=57884/29943, merge=0/0, ticks=496789/220727, in_queue=717516, util=98.73% 00:09:03.977 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:04.544 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:04.544 07:34:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:04.544 07:34:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:04.544 07:34:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:04.544 07:34:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:04.544 07:34:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:04.544 07:34:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:04.544 07:34:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:04.544 07:34:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:04.544 07:34:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:04.544 07:34:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:04.544 07:34:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:04.544 07:34:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:04.544 07:34:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:04.544 07:34:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=66767 00:09:04.544 07:34:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:04.544 07:34:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:04.544 [global] 00:09:04.544 thread=1 00:09:04.544 invalidate=1 00:09:04.544 rw=randrw 00:09:04.544 time_based=1 00:09:04.544 runtime=6 00:09:04.544 ioengine=libaio 00:09:04.544 direct=1 00:09:04.544 bs=4096 00:09:04.544 iodepth=128 00:09:04.544 norandommap=0 00:09:04.544 numjobs=1 00:09:04.544 00:09:04.544 verify_dump=1 00:09:04.544 verify_backlog=512 00:09:04.544 verify_state_save=0 00:09:04.544 do_verify=1 00:09:04.544 verify=crc32c-intel 00:09:04.544 [job0] 00:09:04.544 filename=/dev/nvme0n1 00:09:04.802 Could not set queue depth (nvme0n1) 00:09:04.802 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:04.802 fio-3.35 00:09:04.802 Starting 1 thread 00:09:05.737 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:05.995 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:06.254 
07:34:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:06.254 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:06.254 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:06.254 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:06.254 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:06.254 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:06.254 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:06.254 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:06.254 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:06.254 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:06.254 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:06.254 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:06.254 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:06.512 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:06.771 07:34:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:06.771 07:34:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:06.771 07:34:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:06.771 07:34:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:06.771 07:34:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:06.771 07:34:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:06.771 07:34:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:06.771 07:34:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:06.771 07:34:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:06.771 07:34:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:06.771 07:34:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:06.771 07:34:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:06.771 07:34:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 66767 00:09:10.958 00:09:10.958 job0: (groupid=0, jobs=1): err= 0: pid=66788: Fri Jul 26 07:34:36 2024 00:09:10.958 read: IOPS=10.6k, BW=41.5MiB/s (43.5MB/s)(249MiB/6007msec) 00:09:10.958 slat (usec): min=6, max=6194, avg=48.40, stdev=204.65 00:09:10.958 clat (usec): min=307, max=20837, avg=8287.86, stdev=2239.07 00:09:10.958 lat (usec): min=333, max=20846, avg=8336.26, stdev=2247.96 00:09:10.958 clat percentiles (usec): 00:09:10.958 | 1.00th=[ 2802], 5.00th=[ 4359], 10.00th=[ 5276], 20.00th=[ 6980], 00:09:10.958 | 30.00th=[ 7832], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8586], 00:09:10.958 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[10814], 95.00th=[12518], 00:09:10.958 | 99.00th=[14091], 99.50th=[15926], 99.90th=[18744], 99.95th=[19530], 00:09:10.958 | 99.99th=[20317] 00:09:10.958 bw ( KiB/s): min=12648, max=33069, per=52.21%, avg=22175.73, stdev=5675.37, samples=11 00:09:10.958 iops : min= 3162, max= 8267, avg=5543.91, stdev=1418.79, samples=11 00:09:10.958 write: IOPS=6265, BW=24.5MiB/s (25.7MB/s)(131MiB/5361msec); 0 zone resets 00:09:10.958 slat (usec): min=12, max=2990, avg=55.74, stdev=144.81 00:09:10.958 clat (usec): min=1069, max=18725, avg=6972.79, stdev=2075.16 00:09:10.958 lat (usec): min=1102, max=18748, avg=7028.53, stdev=2085.67 00:09:10.958 clat percentiles (usec): 00:09:10.958 | 1.00th=[ 2376], 5.00th=[ 3294], 10.00th=[ 3851], 20.00th=[ 4752], 00:09:10.958 | 30.00th=[ 6325], 40.00th=[ 7242], 50.00th=[ 7570], 60.00th=[ 7832], 00:09:10.958 | 70.00th=[ 8029], 80.00th=[ 8291], 90.00th=[ 8717], 95.00th=[ 9896], 00:09:10.958 | 99.00th=[11994], 99.50th=[12518], 99.90th=[16057], 99.95th=[17171], 00:09:10.958 | 99.99th=[18220] 00:09:10.958 bw ( KiB/s): min=12592, max=32463, per=88.52%, avg=22183.18, stdev=5456.24, samples=11 00:09:10.958 iops : min= 3148, max= 8115, avg=5545.73, stdev=1363.92, samples=11 00:09:10.958 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.05% 00:09:10.958 lat (msec) : 2=0.37%, 4=6.11%, 10=83.83%, 20=9.57%, 50=0.01% 00:09:10.958 cpu : usr=5.91%, sys=21.11%, ctx=5845, majf=0, minf=96 00:09:10.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:10.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.958 issued rwts: total=63788,33587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.958 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:09:10.958 00:09:10.958 Run status group 0 (all jobs): 00:09:10.958 READ: bw=41.5MiB/s (43.5MB/s), 41.5MiB/s-41.5MiB/s (43.5MB/s-43.5MB/s), io=249MiB (261MB), run=6007-6007msec 00:09:10.958 WRITE: bw=24.5MiB/s (25.7MB/s), 24.5MiB/s-24.5MiB/s (25.7MB/s-25.7MB/s), io=131MiB (138MB), run=5361-5361msec 00:09:10.958 00:09:10.958 Disk stats (read/write): 00:09:10.958 nvme0n1: ios=63210/32794, merge=0/0, ticks=503498/214148, in_queue=717646, util=98.72% 00:09:10.958 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:10.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:10.958 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:10.958 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:09:10.958 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:10.958 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:10.958 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:10.958 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:10.958 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:10.958 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.217 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:11.217 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:11.217 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:11.217 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:11.217 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:11.217 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:11.476 rmmod nvme_tcp 00:09:11.476 rmmod nvme_fabrics 00:09:11.476 rmmod nvme_keyring 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # 
'[' -n 66570 ']' 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 66570 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 66570 ']' 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 66570 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66570 00:09:11.476 killing process with pid 66570 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66570' 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 66570 00:09:11.476 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 66570 00:09:11.736 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:11.736 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:11.736 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:11.736 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:11.736 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:11.736 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.736 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.736 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.736 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:11.736 00:09:11.736 real 0m19.338s 00:09:11.736 user 1m13.676s 00:09:11.736 sys 0m8.129s 00:09:11.736 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.736 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:11.736 ************************************ 00:09:11.736 END TEST nvmf_target_multipath 00:09:11.736 ************************************ 00:09:11.736 07:34:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:11.736 07:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:11.736 07:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.736 07:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.736 
************************************ 00:09:11.736 START TEST nvmf_zcopy 00:09:11.736 ************************************ 00:09:11.736 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:12.018 * Looking for test storage... 00:09:12.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.018 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:12.019 Cannot find device "nvmf_tgt_br" 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 
-- # true 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:12.019 Cannot find device "nvmf_tgt_br2" 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:12.019 Cannot find device "nvmf_tgt_br" 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:12.019 Cannot find device "nvmf_tgt_br2" 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:12.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:12.019 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:12.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:09:12.289 00:09:12.289 --- 10.0.0.2 ping statistics --- 00:09:12.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.289 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:12.289 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:12.289 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:09:12.289 00:09:12.289 --- 10.0.0.3 ping statistics --- 00:09:12.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.289 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:12.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:12.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:12.289 00:09:12.289 --- 10.0.0.1 ping statistics --- 00:09:12.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.289 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67034 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67034 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 67034 ']' 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.289 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.289 [2024-07-26 07:34:37.842234] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:09:12.289 [2024-07-26 07:34:37.842338] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.548 [2024-07-26 07:34:37.977819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.548 [2024-07-26 07:34:38.089846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.548 [2024-07-26 07:34:38.089926] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.548 [2024-07-26 07:34:38.089939] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.548 [2024-07-26 07:34:38.089948] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.548 [2024-07-26 07:34:38.089956] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.548 [2024-07-26 07:34:38.089989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.808 [2024-07-26 07:34:38.164195] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:13.374 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.374 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:13.374 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:13.374 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:13.374 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.374 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.374 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:13.374 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:13.374 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.374 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.375 [2024-07-26 07:34:38.802750] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:13.375 [2024-07-26 07:34:38.818884] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.375 malloc0 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:13.375 { 00:09:13.375 "params": { 00:09:13.375 "name": "Nvme$subsystem", 00:09:13.375 "trtype": "$TEST_TRANSPORT", 00:09:13.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.375 "adrfam": "ipv4", 00:09:13.375 "trsvcid": "$NVMF_PORT", 00:09:13.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:13.375 "hdgst": ${hdgst:-false}, 00:09:13.375 "ddgst": ${ddgst:-false} 00:09:13.375 }, 00:09:13.375 "method": "bdev_nvme_attach_controller" 00:09:13.375 } 00:09:13.375 EOF 00:09:13.375 )") 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
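With the target listening, zcopy.sh provisions it over /var/tmp/spdk.sock: a TCP transport with zero-copy enabled, subsystem nqn.2016-06.io.spdk:cnode1, data and discovery listeners on 10.0.0.2:4420, a 32 MiB malloc bdev with 4096-byte blocks, and that bdev attached as namespace 1. rpc_cmd is a thin wrapper around scripts/rpc.py, so the same sequence can be issued directly as sketched below. The JSON rendered by gen_nvmf_target_json appears in the log right after this point; the "subsystems"/"bdev" wrapper and the /tmp file path in the sketch are assumptions (the test actually feeds the config to bdevperf through a file descriptor).

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
# Transport flags exactly as used by zcopy.sh / nvmf/common.sh for the uring test.
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# The first bdevperf pass (10 s, queue depth 128, 8 KiB verify I/O) then attaches
# to that subsystem as an NVMe-oF/TCP initiator via the generated bdev config:
cat > /tmp/bdevperf_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192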
00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:13.375 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:13.375 "params": { 00:09:13.375 "name": "Nvme1", 00:09:13.375 "trtype": "tcp", 00:09:13.375 "traddr": "10.0.0.2", 00:09:13.375 "adrfam": "ipv4", 00:09:13.375 "trsvcid": "4420", 00:09:13.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:13.375 "hdgst": false, 00:09:13.375 "ddgst": false 00:09:13.375 }, 00:09:13.375 "method": "bdev_nvme_attach_controller" 00:09:13.375 }' 00:09:13.375 [2024-07-26 07:34:38.908280] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:09:13.375 [2024-07-26 07:34:38.908357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67067 ] 00:09:13.634 [2024-07-26 07:34:39.045497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.634 [2024-07-26 07:34:39.188549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.893 [2024-07-26 07:34:39.273543] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:13.893 Running I/O for 10 seconds... 00:09:23.863 00:09:23.863 Latency(us) 00:09:23.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.863 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:23.863 Verification LBA range: start 0x0 length 0x1000 00:09:23.863 Nvme1n1 : 10.01 6259.53 48.90 0.00 0.00 20384.78 2040.55 30742.34 00:09:23.863 =================================================================================================================== 00:09:23.863 Total : 6259.53 48.90 0.00 0.00 20384.78 2040.55 30742.34 00:09:24.122 07:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67189 00:09:24.122 07:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:24.122 07:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.122 07:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:24.122 07:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:24.122 07:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:24.122 07:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:24.122 07:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:24.122 07:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:24.122 { 00:09:24.122 "params": { 00:09:24.122 "name": "Nvme$subsystem", 00:09:24.122 "trtype": "$TEST_TRANSPORT", 00:09:24.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.122 "adrfam": "ipv4", 00:09:24.122 "trsvcid": "$NVMF_PORT", 00:09:24.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.122 "hdgst": ${hdgst:-false}, 00:09:24.122 "ddgst": ${ddgst:-false} 00:09:24.122 }, 00:09:24.122 "method": "bdev_nvme_attach_controller" 00:09:24.122 } 00:09:24.122 
EOF 00:09:24.122 )") 00:09:24.382 07:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:24.382 [2024-07-26 07:34:49.727585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.727633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 07:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:24.382 07:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:24.382 07:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:24.382 "params": { 00:09:24.382 "name": "Nvme1", 00:09:24.382 "trtype": "tcp", 00:09:24.382 "traddr": "10.0.0.2", 00:09:24.382 "adrfam": "ipv4", 00:09:24.382 "trsvcid": "4420", 00:09:24.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.382 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.382 "hdgst": false, 00:09:24.382 "ddgst": false 00:09:24.382 }, 00:09:24.382 "method": "bdev_nvme_attach_controller" 00:09:24.382 }' 00:09:24.382 [2024-07-26 07:34:49.739555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.739581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.751548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.751574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.763552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.763577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.775541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.775566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.780196] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:09:24.382 [2024-07-26 07:34:49.780303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67189 ] 00:09:24.382 [2024-07-26 07:34:49.787557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.787590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.799558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.799583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.811540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.811564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.823545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.823569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.835545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.835569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.847547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.847571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.859550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.859574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.871552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.871575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.883555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.883578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.895578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.895601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.907563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.907587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.919565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.919589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.920194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.382 [2024-07-26 07:34:49.931584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.931611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.943585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.943610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.955579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.955602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.967609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.967633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.382 [2024-07-26 07:34:49.979610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.382 [2024-07-26 07:34:49.979636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:49.991620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:49.991648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.003617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.003642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.015617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.015642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.027625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.027650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.039311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.641 [2024-07-26 07:34:50.039627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.039643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.051627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.051652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.063640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.063669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.075641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.075667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.087644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.087672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.099649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.099676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 
[2024-07-26 07:34:50.111655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.111683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.123131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:24.641 [2024-07-26 07:34:50.123666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.123689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.135663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.135692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.147673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.147701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.159664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.159690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.171665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.171691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.183711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.183759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.195717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.195763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.207727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.207773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.219747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.219776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.641 [2024-07-26 07:34:50.231766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.641 [2024-07-26 07:34:50.231814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.900 [2024-07-26 07:34:50.243778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.900 [2024-07-26 07:34:50.243811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.900 Running I/O for 5 seconds... 
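A second bdevperf instance (pid 67189 in the log) now runs a 5-second random 50/50 read/write workload at queue depth 128 and 8 KiB I/O against the same subsystem, fed the same generated JSON via /dev/fd/63; the earlier verify run finished cleanly at 6259.53 IOPS, i.e. 6259.53 * 8192 bytes ≈ 48.9 MiB/s, matching the MiB/s column of the table above. The wall of "Requested NSID 1 already in use" / "Unable to add namespace" pairs surrounding it is expected: after recording perfpid and disabling xtrace, the test script apparently keeps re-issuing nvmf_subsystem_add_ns for NSID 1 while the I/O is in flight, and every attempt is rejected because malloc0 already occupies that NSID. The point of the loop is that each rejected RPC still drives the subsystem pause/resume path (the nvmf_rpc_ns_paused callback named in the messages) concurrently with zero-copy I/O. A hedged sketch of a loop to that effect follows; it is not the actual zcopy.sh code, which is outside this excerpt.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
# Hammer the add-namespace RPC for as long as bdevperf (perfpid) is alive; the calls
# are expected to fail, roughly every 12 ms judging by the timestamps in the log.
while kill -0 "$perfpid" 2> /dev/null; do
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done
wait "$perfpid"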
00:09:24.900 [2024-07-26 07:34:50.255781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.900 [2024-07-26 07:34:50.255809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.900 [2024-07-26 07:34:50.273059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.900 [2024-07-26 07:34:50.273108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.900 [2024-07-26 07:34:50.290062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.900 [2024-07-26 07:34:50.290110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.900 [2024-07-26 07:34:50.306671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.900 [2024-07-26 07:34:50.306705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.900 [2024-07-26 07:34:50.322638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.900 [2024-07-26 07:34:50.322687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.900 [2024-07-26 07:34:50.340108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.900 [2024-07-26 07:34:50.340157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.900 [2024-07-26 07:34:50.354743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.900 [2024-07-26 07:34:50.354790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.900 [2024-07-26 07:34:50.370515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.900 [2024-07-26 07:34:50.370561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.900 [2024-07-26 07:34:50.387751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.900 [2024-07-26 07:34:50.387800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.900 [2024-07-26 07:34:50.404644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.900 [2024-07-26 07:34:50.404693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.900 [2024-07-26 07:34:50.420188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.900 [2024-07-26 07:34:50.420236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.900 [2024-07-26 07:34:50.431409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.900 [2024-07-26 07:34:50.431456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.900 [2024-07-26 07:34:50.448539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.900 [2024-07-26 07:34:50.448587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.900 [2024-07-26 07:34:50.463898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.900 [2024-07-26 07:34:50.463946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.900 [2024-07-26 07:34:50.479143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.900 
[2024-07-26 07:34:50.479190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.900 [2024-07-26 07:34:50.488719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.900 [2024-07-26 07:34:50.488766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.159 [2024-07-26 07:34:50.505169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.159 [2024-07-26 07:34:50.505204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.159 [2024-07-26 07:34:50.521919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.159 [2024-07-26 07:34:50.521967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.159 [2024-07-26 07:34:50.538460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.159 [2024-07-26 07:34:50.538550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.159 [2024-07-26 07:34:50.554696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.159 [2024-07-26 07:34:50.554742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.159 [2024-07-26 07:34:50.573214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.159 [2024-07-26 07:34:50.573261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.159 [2024-07-26 07:34:50.588102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.159 [2024-07-26 07:34:50.588149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.159 [2024-07-26 07:34:50.604006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.159 [2024-07-26 07:34:50.604054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.159 [2024-07-26 07:34:50.620499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.159 [2024-07-26 07:34:50.620545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.159 [2024-07-26 07:34:50.630146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.159 [2024-07-26 07:34:50.630193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.159 [2024-07-26 07:34:50.644790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.159 [2024-07-26 07:34:50.644824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.159 [2024-07-26 07:34:50.661895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.159 [2024-07-26 07:34:50.661943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.159 [2024-07-26 07:34:50.678015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.159 [2024-07-26 07:34:50.678046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.159 [2024-07-26 07:34:50.687208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.159 [2024-07-26 07:34:50.687254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.159 [2024-07-26 07:34:50.702623] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.159 [2024-07-26 07:34:50.702672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.159 [2024-07-26 07:34:50.719765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.159 [2024-07-26 07:34:50.719815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.159 [2024-07-26 07:34:50.736382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.159 [2024-07-26 07:34:50.736414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.159 [2024-07-26 07:34:50.753940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.159 [2024-07-26 07:34:50.753989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.418 [2024-07-26 07:34:50.768432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.418 [2024-07-26 07:34:50.768487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.418 [2024-07-26 07:34:50.783976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.418 [2024-07-26 07:34:50.784042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.418 [2024-07-26 07:34:50.801293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.418 [2024-07-26 07:34:50.801325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.418 [2024-07-26 07:34:50.817329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.418 [2024-07-26 07:34:50.817363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.418 [2024-07-26 07:34:50.835060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.418 [2024-07-26 07:34:50.835109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.418 [2024-07-26 07:34:50.850133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.418 [2024-07-26 07:34:50.850180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.418 [2024-07-26 07:34:50.867549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.418 [2024-07-26 07:34:50.867580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.418 [2024-07-26 07:34:50.882276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.418 [2024-07-26 07:34:50.882324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.418 [2024-07-26 07:34:50.898174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.418 [2024-07-26 07:34:50.898222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.418 [2024-07-26 07:34:50.914857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.418 [2024-07-26 07:34:50.914905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.418 [2024-07-26 07:34:50.931596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.418 [2024-07-26 07:34:50.931628] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.418 [2024-07-26 07:34:50.941397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.418 [2024-07-26 07:34:50.941430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.418 [2024-07-26 07:34:50.956780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.418 [2024-07-26 07:34:50.956814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.418 [2024-07-26 07:34:50.966632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.418 [2024-07-26 07:34:50.966664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.418 [2024-07-26 07:34:50.983035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.418 [2024-07-26 07:34:50.983067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.418 [2024-07-26 07:34:50.992529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.418 [2024-07-26 07:34:50.992576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.418 [2024-07-26 07:34:51.008295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.418 [2024-07-26 07:34:51.008343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.677 [2024-07-26 07:34:51.024889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.677 [2024-07-26 07:34:51.024954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.677 [2024-07-26 07:34:51.041171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.677 [2024-07-26 07:34:51.041204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.677 [2024-07-26 07:34:51.056945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.677 [2024-07-26 07:34:51.056993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.677 [2024-07-26 07:34:51.065997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.677 [2024-07-26 07:34:51.066044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.677 [2024-07-26 07:34:51.082362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.677 [2024-07-26 07:34:51.082410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.677 [2024-07-26 07:34:51.099427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.677 [2024-07-26 07:34:51.099475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.677 [2024-07-26 07:34:51.115460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.677 [2024-07-26 07:34:51.115540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.677 [2024-07-26 07:34:51.132629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.677 [2024-07-26 07:34:51.132662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.677 [2024-07-26 07:34:51.148740] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.677 [2024-07-26 07:34:51.148775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.677 [2024-07-26 07:34:51.166083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.677 [2024-07-26 07:34:51.166130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.677 [2024-07-26 07:34:51.183544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.677 [2024-07-26 07:34:51.183575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.677 [2024-07-26 07:34:51.200180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.677 [2024-07-26 07:34:51.200244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.677 [2024-07-26 07:34:51.217825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.677 [2024-07-26 07:34:51.217860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.677 [2024-07-26 07:34:51.232373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.677 [2024-07-26 07:34:51.232421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.677 [2024-07-26 07:34:51.249637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.677 [2024-07-26 07:34:51.249669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.677 [2024-07-26 07:34:51.262775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.677 [2024-07-26 07:34:51.262809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.936 [2024-07-26 07:34:51.278514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.936 [2024-07-26 07:34:51.278558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.936 [2024-07-26 07:34:51.295371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.936 [2024-07-26 07:34:51.295419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.936 [2024-07-26 07:34:51.312366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.936 [2024-07-26 07:34:51.312413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.936 [2024-07-26 07:34:51.328332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.936 [2024-07-26 07:34:51.328365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.936 [2024-07-26 07:34:51.345248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.936 [2024-07-26 07:34:51.345282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.936 [2024-07-26 07:34:51.361030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.936 [2024-07-26 07:34:51.361079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.936 [2024-07-26 07:34:51.370609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.936 [2024-07-26 07:34:51.370657] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.936 [2024-07-26 07:34:51.385765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.936 [2024-07-26 07:34:51.385813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.936 [2024-07-26 07:34:51.400725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.936 [2024-07-26 07:34:51.400757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.936 [2024-07-26 07:34:51.412233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.936 [2024-07-26 07:34:51.412281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.936 [2024-07-26 07:34:51.429371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.936 [2024-07-26 07:34:51.429405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.936 [2024-07-26 07:34:51.444062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.936 [2024-07-26 07:34:51.444112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.936 [2024-07-26 07:34:51.460042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.936 [2024-07-26 07:34:51.460074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.936 [2024-07-26 07:34:51.479033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.936 [2024-07-26 07:34:51.479081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.936 [2024-07-26 07:34:51.493638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.936 [2024-07-26 07:34:51.493685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.936 [2024-07-26 07:34:51.503303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.936 [2024-07-26 07:34:51.503350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.936 [2024-07-26 07:34:51.518272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.936 [2024-07-26 07:34:51.518320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.936 [2024-07-26 07:34:51.527604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.936 [2024-07-26 07:34:51.527636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.194 [2024-07-26 07:34:51.543212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.194 [2024-07-26 07:34:51.543259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.194 [2024-07-26 07:34:51.558452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.194 [2024-07-26 07:34:51.558530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.194 [2024-07-26 07:34:51.567812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.194 [2024-07-26 07:34:51.567844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.195 [2024-07-26 07:34:51.583385] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.195 [2024-07-26 07:34:51.583418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.195 [2024-07-26 07:34:51.602143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.195 [2024-07-26 07:34:51.602175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.195 [2024-07-26 07:34:51.616529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.195 [2024-07-26 07:34:51.616577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.195 [2024-07-26 07:34:51.626618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.195 [2024-07-26 07:34:51.626666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.195 [2024-07-26 07:34:51.641291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.195 [2024-07-26 07:34:51.641324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.195 [2024-07-26 07:34:51.651304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.195 [2024-07-26 07:34:51.651354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.195 [2024-07-26 07:34:51.665883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.195 [2024-07-26 07:34:51.665930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.195 [2024-07-26 07:34:51.675403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.195 [2024-07-26 07:34:51.675451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.195 [2024-07-26 07:34:51.691399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.195 [2024-07-26 07:34:51.691448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.195 [2024-07-26 07:34:51.700447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.195 [2024-07-26 07:34:51.700524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.195 [2024-07-26 07:34:51.716997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.195 [2024-07-26 07:34:51.717045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.195 [2024-07-26 07:34:51.733865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.195 [2024-07-26 07:34:51.733913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.195 [2024-07-26 07:34:51.751039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.195 [2024-07-26 07:34:51.751086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.195 [2024-07-26 07:34:51.766766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.195 [2024-07-26 07:34:51.766816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.195 [2024-07-26 07:34:51.776666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.195 [2024-07-26 07:34:51.776698] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.195 [2024-07-26 07:34:51.791893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.195 [2024-07-26 07:34:51.791941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.454 [2024-07-26 07:34:51.808622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.454 [2024-07-26 07:34:51.808670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.454 [2024-07-26 07:34:51.824255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.454 [2024-07-26 07:34:51.824303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.454 [2024-07-26 07:34:51.833540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.454 [2024-07-26 07:34:51.833586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.454 [2024-07-26 07:34:51.849670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.454 [2024-07-26 07:34:51.849717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.454 [2024-07-26 07:34:51.866232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.454 [2024-07-26 07:34:51.866280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.454 [2024-07-26 07:34:51.883022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.454 [2024-07-26 07:34:51.883069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.454 [2024-07-26 07:34:51.899736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.454 [2024-07-26 07:34:51.899784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.454 [2024-07-26 07:34:51.916507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.454 [2024-07-26 07:34:51.916583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.454 [2024-07-26 07:34:51.934096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.454 [2024-07-26 07:34:51.934144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.454 [2024-07-26 07:34:51.950748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.454 [2024-07-26 07:34:51.950796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.454 [2024-07-26 07:34:51.967867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.454 [2024-07-26 07:34:51.967915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.454 [2024-07-26 07:34:51.985033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.454 [2024-07-26 07:34:51.985082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.454 [2024-07-26 07:34:52.001426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.454 [2024-07-26 07:34:52.001498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.454 [2024-07-26 07:34:52.018990] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.454 [2024-07-26 07:34:52.019038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.454 [2024-07-26 07:34:52.033316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.454 [2024-07-26 07:34:52.033349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.454 [2024-07-26 07:34:52.049431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.454 [2024-07-26 07:34:52.049507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.713 [2024-07-26 07:34:52.066254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.713 [2024-07-26 07:34:52.066304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.713 [2024-07-26 07:34:52.082947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.713 [2024-07-26 07:34:52.082995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.713 [2024-07-26 07:34:52.099308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.713 [2024-07-26 07:34:52.099356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.713 [2024-07-26 07:34:52.115229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.713 [2024-07-26 07:34:52.115277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.713 [2024-07-26 07:34:52.124066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.713 [2024-07-26 07:34:52.124114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.713 [2024-07-26 07:34:52.139549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.713 [2024-07-26 07:34:52.139598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.713 [2024-07-26 07:34:52.155235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.713 [2024-07-26 07:34:52.155284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.713 [2024-07-26 07:34:52.172083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.713 [2024-07-26 07:34:52.172115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.713 [2024-07-26 07:34:52.188413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.713 [2024-07-26 07:34:52.188475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.713 [2024-07-26 07:34:52.207492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.713 [2024-07-26 07:34:52.207538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.713 [2024-07-26 07:34:52.222113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.713 [2024-07-26 07:34:52.222145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.713 [2024-07-26 07:34:52.233685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.713 [2024-07-26 07:34:52.233733] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.713 [2024-07-26 07:34:52.250718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.713 [2024-07-26 07:34:52.250765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.713 [2024-07-26 07:34:52.266957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.713 [2024-07-26 07:34:52.267004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.713 [2024-07-26 07:34:52.284251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.713 [2024-07-26 07:34:52.284301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.713 [2024-07-26 07:34:52.301186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.713 [2024-07-26 07:34:52.301220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.972 [2024-07-26 07:34:52.317702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.972 [2024-07-26 07:34:52.317732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.972 [2024-07-26 07:34:52.336140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.972 [2024-07-26 07:34:52.336187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.972 [2024-07-26 07:34:52.350547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.972 [2024-07-26 07:34:52.350594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.972 [2024-07-26 07:34:52.366063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.972 [2024-07-26 07:34:52.366110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.972 [2024-07-26 07:34:52.384911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.972 [2024-07-26 07:34:52.384960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.972 [2024-07-26 07:34:52.399302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.972 [2024-07-26 07:34:52.399350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.972 [2024-07-26 07:34:52.411403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.972 [2024-07-26 07:34:52.411451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.972 [2024-07-26 07:34:52.428240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.972 [2024-07-26 07:34:52.428287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.972 [2024-07-26 07:34:52.442411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.972 [2024-07-26 07:34:52.442458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.972 [2024-07-26 07:34:52.457758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.972 [2024-07-26 07:34:52.457804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.972 [2024-07-26 07:34:52.467176] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.972 [2024-07-26 07:34:52.467223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.972 [2024-07-26 07:34:52.483249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.972 [2024-07-26 07:34:52.483299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.972 [2024-07-26 07:34:52.493053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.972 [2024-07-26 07:34:52.493086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.972 [2024-07-26 07:34:52.509246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.972 [2024-07-26 07:34:52.509287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.972 [2024-07-26 07:34:52.525633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.972 [2024-07-26 07:34:52.525666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.972 [2024-07-26 07:34:52.542239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.972 [2024-07-26 07:34:52.542272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.972 [2024-07-26 07:34:52.559478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.972 [2024-07-26 07:34:52.559559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.230 [2024-07-26 07:34:52.575398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.230 [2024-07-26 07:34:52.575431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.230 [2024-07-26 07:34:52.594006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.230 [2024-07-26 07:34:52.594039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.230 [2024-07-26 07:34:52.608632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.230 [2024-07-26 07:34:52.608665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.230 [2024-07-26 07:34:52.623723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.230 [2024-07-26 07:34:52.623756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.230 [2024-07-26 07:34:52.633013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.230 [2024-07-26 07:34:52.633045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.230 [2024-07-26 07:34:52.649243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.230 [2024-07-26 07:34:52.649276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.230 [2024-07-26 07:34:52.666294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.230 [2024-07-26 07:34:52.666342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.230 [2024-07-26 07:34:52.683471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.230 [2024-07-26 07:34:52.683531] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.230 [2024-07-26 07:34:52.698145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.230 [2024-07-26 07:34:52.698177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.230 [2024-07-26 07:34:52.713451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.230 [2024-07-26 07:34:52.713516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.230 [2024-07-26 07:34:52.722893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.230 [2024-07-26 07:34:52.722940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.230 [2024-07-26 07:34:52.738570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.230 [2024-07-26 07:34:52.738617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.230 [2024-07-26 07:34:52.755505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.230 [2024-07-26 07:34:52.755548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.230 [2024-07-26 07:34:52.773107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.230 [2024-07-26 07:34:52.773184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.230 [2024-07-26 07:34:52.789287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.230 [2024-07-26 07:34:52.789319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.230 [2024-07-26 07:34:52.806743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.230 [2024-07-26 07:34:52.806777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.230 [2024-07-26 07:34:52.822530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.230 [2024-07-26 07:34:52.822563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.489 [2024-07-26 07:34:52.831985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.489 [2024-07-26 07:34:52.832018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.489 [2024-07-26 07:34:52.847240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.489 [2024-07-26 07:34:52.847288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.489 [2024-07-26 07:34:52.856956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.489 [2024-07-26 07:34:52.857004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.489 [2024-07-26 07:34:52.873157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.489 [2024-07-26 07:34:52.873190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.489 [2024-07-26 07:34:52.889714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.489 [2024-07-26 07:34:52.889762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.489 [2024-07-26 07:34:52.907003] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.489 [2024-07-26 07:34:52.907052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.489 [2024-07-26 07:34:52.921103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.489 [2024-07-26 07:34:52.921161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.489 [2024-07-26 07:34:52.938392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.489 [2024-07-26 07:34:52.938440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.489 [2024-07-26 07:34:52.952679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.489 [2024-07-26 07:34:52.952727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.489 [2024-07-26 07:34:52.967400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.489 [2024-07-26 07:34:52.967449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.489 [2024-07-26 07:34:52.984407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.489 [2024-07-26 07:34:52.984456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.489 [2024-07-26 07:34:52.999161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.489 [2024-07-26 07:34:52.999208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.489 [2024-07-26 07:34:53.010565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.489 [2024-07-26 07:34:53.010613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.489 [2024-07-26 07:34:53.027769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.489 [2024-07-26 07:34:53.027803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.489 [2024-07-26 07:34:53.043096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.489 [2024-07-26 07:34:53.043145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.489 [2024-07-26 07:34:53.059889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.489 [2024-07-26 07:34:53.059937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.489 [2024-07-26 07:34:53.075425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.489 [2024-07-26 07:34:53.075472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.489 [2024-07-26 07:34:53.084600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.489 [2024-07-26 07:34:53.084632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.747 [2024-07-26 07:34:53.100621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.747 [2024-07-26 07:34:53.100669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.747 [2024-07-26 07:34:53.110052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.747 [2024-07-26 07:34:53.110100] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.747 [2024-07-26 07:34:53.126024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.747 [2024-07-26 07:34:53.126056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.747 [2024-07-26 07:34:53.141711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.747 [2024-07-26 07:34:53.141759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.747 [2024-07-26 07:34:53.160306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.748 [2024-07-26 07:34:53.160354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.748 [2024-07-26 07:34:53.174930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.748 [2024-07-26 07:34:53.174960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.748 [2024-07-26 07:34:53.191852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.748 [2024-07-26 07:34:53.191883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.748 [2024-07-26 07:34:53.208271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.748 [2024-07-26 07:34:53.208319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.748 [2024-07-26 07:34:53.224717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.748 [2024-07-26 07:34:53.224749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.748 [2024-07-26 07:34:53.242106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.748 [2024-07-26 07:34:53.242139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.748 [2024-07-26 07:34:53.256653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.748 [2024-07-26 07:34:53.256700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.748 [2024-07-26 07:34:53.272176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.748 [2024-07-26 07:34:53.272225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.748 [2024-07-26 07:34:53.290699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.748 [2024-07-26 07:34:53.290732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.748 [2024-07-26 07:34:53.305222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.748 [2024-07-26 07:34:53.305254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.748 [2024-07-26 07:34:53.320520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.748 [2024-07-26 07:34:53.320567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.748 [2024-07-26 07:34:53.330433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.748 [2024-07-26 07:34:53.330492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.748 [2024-07-26 07:34:53.345185] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.748 [2024-07-26 07:34:53.345218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.005 [2024-07-26 07:34:53.362607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.005 [2024-07-26 07:34:53.362639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.005 [2024-07-26 07:34:53.379668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.005 [2024-07-26 07:34:53.379716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.005 [2024-07-26 07:34:53.396442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.005 [2024-07-26 07:34:53.396519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.005 [2024-07-26 07:34:53.413261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.005 [2024-07-26 07:34:53.413294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.005 [2024-07-26 07:34:53.429095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.005 [2024-07-26 07:34:53.429152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.005 [2024-07-26 07:34:53.438468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.005 [2024-07-26 07:34:53.438528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.005 [2024-07-26 07:34:53.453815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.005 [2024-07-26 07:34:53.453863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.005 [2024-07-26 07:34:53.465719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.005 [2024-07-26 07:34:53.465766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.005 [2024-07-26 07:34:53.482282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.005 [2024-07-26 07:34:53.482330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.005 [2024-07-26 07:34:53.499172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.005 [2024-07-26 07:34:53.499220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.005 [2024-07-26 07:34:53.515956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.005 [2024-07-26 07:34:53.515987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.005 [2024-07-26 07:34:53.532079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.005 [2024-07-26 07:34:53.532128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.005 [2024-07-26 07:34:53.548877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.005 [2024-07-26 07:34:53.548909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.005 [2024-07-26 07:34:53.566065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.005 [2024-07-26 07:34:53.566114] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.005 [2024-07-26 07:34:53.582309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.005 [2024-07-26 07:34:53.582343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.005 [2024-07-26 07:34:53.600990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.005 [2024-07-26 07:34:53.601038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.278 [2024-07-26 07:34:53.616176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.278 [2024-07-26 07:34:53.616224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.278 [2024-07-26 07:34:53.625837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.278 [2024-07-26 07:34:53.625902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.278 [2024-07-26 07:34:53.641454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.278 [2024-07-26 07:34:53.641527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.278 [2024-07-26 07:34:53.658439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.278 [2024-07-26 07:34:53.658501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.278 [2024-07-26 07:34:53.674624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.278 [2024-07-26 07:34:53.674655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.278 [2024-07-26 07:34:53.691598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.278 [2024-07-26 07:34:53.691646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.278 [2024-07-26 07:34:53.708375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.278 [2024-07-26 07:34:53.708424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.278 [2024-07-26 07:34:53.726060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.278 [2024-07-26 07:34:53.726107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.278 [2024-07-26 07:34:53.740572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.278 [2024-07-26 07:34:53.740622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.278 [2024-07-26 07:34:53.757326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.278 [2024-07-26 07:34:53.757359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.278 [2024-07-26 07:34:53.772839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.278 [2024-07-26 07:34:53.772902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.279 [2024-07-26 07:34:53.788666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.279 [2024-07-26 07:34:53.788714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.279 [2024-07-26 07:34:53.805469] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.279 [2024-07-26 07:34:53.805534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.279 [2024-07-26 07:34:53.822153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.279 [2024-07-26 07:34:53.822201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.279 [2024-07-26 07:34:53.838831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.279 [2024-07-26 07:34:53.838895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.279 [2024-07-26 07:34:53.854590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.279 [2024-07-26 07:34:53.854628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.279 [2024-07-26 07:34:53.873714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.279 [2024-07-26 07:34:53.873746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.537 [2024-07-26 07:34:53.887993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.537 [2024-07-26 07:34:53.888025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.537 [2024-07-26 07:34:53.903783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.537 [2024-07-26 07:34:53.903817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.537 [2024-07-26 07:34:53.921552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.537 [2024-07-26 07:34:53.921585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.537 [2024-07-26 07:34:53.936074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.537 [2024-07-26 07:34:53.936122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.537 [2024-07-26 07:34:53.952902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.537 [2024-07-26 07:34:53.952950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.537 [2024-07-26 07:34:53.969058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.537 [2024-07-26 07:34:53.969105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.537 [2024-07-26 07:34:53.979387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.537 [2024-07-26 07:34:53.979434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.537 [2024-07-26 07:34:53.994036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.537 [2024-07-26 07:34:53.994082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.537 [2024-07-26 07:34:54.006264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.537 [2024-07-26 07:34:54.006311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.537 [2024-07-26 07:34:54.023205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.537 [2024-07-26 07:34:54.023252] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.537 [2024-07-26 07:34:54.037648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.537 [2024-07-26 07:34:54.037694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.537 [2024-07-26 07:34:54.053258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.537 [2024-07-26 07:34:54.053308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.537 [2024-07-26 07:34:54.069643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.537 [2024-07-26 07:34:54.069690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.537 [2024-07-26 07:34:54.080877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.537 [2024-07-26 07:34:54.080940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.537 [2024-07-26 07:34:54.096025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.537 [2024-07-26 07:34:54.096056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.537 [2024-07-26 07:34:54.114036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.537 [2024-07-26 07:34:54.114083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.537 [2024-07-26 07:34:54.128736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.537 [2024-07-26 07:34:54.128768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.796 [2024-07-26 07:34:54.144277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.796 [2024-07-26 07:34:54.144325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.796 [2024-07-26 07:34:54.153266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.796 [2024-07-26 07:34:54.153298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.796 [2024-07-26 07:34:54.169674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.796 [2024-07-26 07:34:54.169722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.796 [2024-07-26 07:34:54.186177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.796 [2024-07-26 07:34:54.186241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.796 [2024-07-26 07:34:54.201990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.796 [2024-07-26 07:34:54.202020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.796 [2024-07-26 07:34:54.211310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.796 [2024-07-26 07:34:54.211357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.796 [2024-07-26 07:34:54.227330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.796 [2024-07-26 07:34:54.227378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.796 [2024-07-26 07:34:54.242914] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.796 [2024-07-26 07:34:54.242950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.796 [2024-07-26 07:34:54.258404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.796 [2024-07-26 07:34:54.258440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.796 [2024-07-26 07:34:54.275917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.796 [2024-07-26 07:34:54.275949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.796 [2024-07-26 07:34:54.292468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.796 [2024-07-26 07:34:54.292543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.796 [2024-07-26 07:34:54.309510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.796 [2024-07-26 07:34:54.309540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.796 [2024-07-26 07:34:54.325823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.796 [2024-07-26 07:34:54.325856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.796 [2024-07-26 07:34:54.342795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.796 [2024-07-26 07:34:54.342845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.796 [2024-07-26 07:34:54.358763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.796 [2024-07-26 07:34:54.358797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.796 [2024-07-26 07:34:54.375768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.796 [2024-07-26 07:34:54.375815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.796 [2024-07-26 07:34:54.391900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.796 [2024-07-26 07:34:54.391948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.055 [2024-07-26 07:34:54.408878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.055 [2024-07-26 07:34:54.408925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.055 [2024-07-26 07:34:54.425171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.055 [2024-07-26 07:34:54.425200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.055 [2024-07-26 07:34:54.442194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.055 [2024-07-26 07:34:54.442242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.055 [2024-07-26 07:34:54.458145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.055 [2024-07-26 07:34:54.458195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.055 [2024-07-26 07:34:54.476433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.055 [2024-07-26 07:34:54.476509] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.055 [2024-07-26 07:34:54.491416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.055 [2024-07-26 07:34:54.491464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.055 [2024-07-26 07:34:54.503261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.055 [2024-07-26 07:34:54.503308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.055 [2024-07-26 07:34:54.518405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.055 [2024-07-26 07:34:54.518454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.055 [2024-07-26 07:34:54.528093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.055 [2024-07-26 07:34:54.528140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.055 [2024-07-26 07:34:54.544088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.055 [2024-07-26 07:34:54.544137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.055 [2024-07-26 07:34:54.559434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.055 [2024-07-26 07:34:54.559496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.055 [2024-07-26 07:34:54.568978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.055 [2024-07-26 07:34:54.569026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.055 [2024-07-26 07:34:54.584271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.055 [2024-07-26 07:34:54.584319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.055 [2024-07-26 07:34:54.601201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.055 [2024-07-26 07:34:54.601233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.055 [2024-07-26 07:34:54.618111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.055 [2024-07-26 07:34:54.618159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.055 [2024-07-26 07:34:54.633784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.055 [2024-07-26 07:34:54.633846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.055 [2024-07-26 07:34:54.643037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.055 [2024-07-26 07:34:54.643069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.314 [2024-07-26 07:34:54.659547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.314 [2024-07-26 07:34:54.659580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.314 [2024-07-26 07:34:54.675371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.314 [2024-07-26 07:34:54.675419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.314 [2024-07-26 07:34:54.692461] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.314 [2024-07-26 07:34:54.692535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.314 [2024-07-26 07:34:54.709405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.314 [2024-07-26 07:34:54.709452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.314 [2024-07-26 07:34:54.725416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.314 [2024-07-26 07:34:54.725476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.314 [2024-07-26 07:34:54.744165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.314 [2024-07-26 07:34:54.744213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.314 [2024-07-26 07:34:54.759267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.314 [2024-07-26 07:34:54.759299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.314 [2024-07-26 07:34:54.776391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.314 [2024-07-26 07:34:54.776440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.314 [2024-07-26 07:34:54.792171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.314 [2024-07-26 07:34:54.792220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.314 [2024-07-26 07:34:54.811125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.314 [2024-07-26 07:34:54.811173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.314 [2024-07-26 07:34:54.825341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.314 [2024-07-26 07:34:54.825373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.314 [2024-07-26 07:34:54.841259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.314 [2024-07-26 07:34:54.841292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.314 [2024-07-26 07:34:54.858238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.314 [2024-07-26 07:34:54.858270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.314 [2024-07-26 07:34:54.875060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.314 [2024-07-26 07:34:54.875109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.314 [2024-07-26 07:34:54.890404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.314 [2024-07-26 07:34:54.890451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.314 [2024-07-26 07:34:54.899776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.314 [2024-07-26 07:34:54.899824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.573 [2024-07-26 07:34:54.916338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.573 [2024-07-26 07:34:54.916372] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.573 [2024-07-26 07:34:54.931985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.573 [2024-07-26 07:34:54.932032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.573 [2024-07-26 07:34:54.947557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.573 [2024-07-26 07:34:54.947585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.573 [2024-07-26 07:34:54.967063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.573 [2024-07-26 07:34:54.967111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.573 [2024-07-26 07:34:54.982295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.573 [2024-07-26 07:34:54.982329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.573 [2024-07-26 07:34:54.999644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.573 [2024-07-26 07:34:54.999691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.573 [2024-07-26 07:34:55.015935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.573 [2024-07-26 07:34:55.015983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.573 [2024-07-26 07:34:55.032976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.573 [2024-07-26 07:34:55.033007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.573 [2024-07-26 07:34:55.049877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.573 [2024-07-26 07:34:55.049925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.573 [2024-07-26 07:34:55.067217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.573 [2024-07-26 07:34:55.067265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.573 [2024-07-26 07:34:55.082909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.573 [2024-07-26 07:34:55.082943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.573 [2024-07-26 07:34:55.092404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.573 [2024-07-26 07:34:55.092452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.573 [2024-07-26 07:34:55.108018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.573 [2024-07-26 07:34:55.108067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.573 [2024-07-26 07:34:55.125817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.573 [2024-07-26 07:34:55.125849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.573 [2024-07-26 07:34:55.142216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.573 [2024-07-26 07:34:55.142263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.573 [2024-07-26 07:34:55.161097] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.573 [2024-07-26 07:34:55.161169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.832 [2024-07-26 07:34:55.175558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.832 [2024-07-26 07:34:55.175588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.832 [2024-07-26 07:34:55.185181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.832 [2024-07-26 07:34:55.185215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.832 [2024-07-26 07:34:55.201493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.832 [2024-07-26 07:34:55.201567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.832 [2024-07-26 07:34:55.219209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.832 [2024-07-26 07:34:55.219257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.832 [2024-07-26 07:34:55.234797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.832 [2024-07-26 07:34:55.234830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.832 [2024-07-26 07:34:55.251989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.832 [2024-07-26 07:34:55.252037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.832 00:09:29.832 Latency(us) 00:09:29.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.832 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:29.832 Nvme1n1 : 5.01 11939.42 93.28 0.00 0.00 10709.52 4676.89 19779.96 00:09:29.832 =================================================================================================================== 00:09:29.832 Total : 11939.42 93.28 0.00 0.00 10709.52 4676.89 19779.96 00:09:29.832 [2024-07-26 07:34:55.263143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.832 [2024-07-26 07:34:55.263174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.832 [2024-07-26 07:34:55.275125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.832 [2024-07-26 07:34:55.275158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.832 [2024-07-26 07:34:55.287117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.832 [2024-07-26 07:34:55.287143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.832 [2024-07-26 07:34:55.299142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.832 [2024-07-26 07:34:55.299197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.832 [2024-07-26 07:34:55.311142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.832 [2024-07-26 07:34:55.311191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.832 [2024-07-26 07:34:55.323145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.832 [2024-07-26 07:34:55.323195] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.832 [2024-07-26 07:34:55.335149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.832 [2024-07-26 07:34:55.335197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.832 [2024-07-26 07:34:55.347158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.832 [2024-07-26 07:34:55.347208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.833 [2024-07-26 07:34:55.359156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.833 [2024-07-26 07:34:55.359204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.833 [2024-07-26 07:34:55.371159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.833 [2024-07-26 07:34:55.371208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.833 [2024-07-26 07:34:55.383165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.833 [2024-07-26 07:34:55.383213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.833 [2024-07-26 07:34:55.395173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.833 [2024-07-26 07:34:55.395220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.833 [2024-07-26 07:34:55.407168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.833 [2024-07-26 07:34:55.407216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.833 [2024-07-26 07:34:55.419171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.833 [2024-07-26 07:34:55.419217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.833 [2024-07-26 07:34:55.431187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.833 [2024-07-26 07:34:55.431234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.091 [2024-07-26 07:34:55.443179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.091 [2024-07-26 07:34:55.443228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.091 [2024-07-26 07:34:55.455172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.091 [2024-07-26 07:34:55.455198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.091 [2024-07-26 07:34:55.467169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.092 [2024-07-26 07:34:55.467192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.092 [2024-07-26 07:34:55.479175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.092 [2024-07-26 07:34:55.479199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.092 [2024-07-26 07:34:55.491202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.092 [2024-07-26 07:34:55.491253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.092 [2024-07-26 07:34:55.503192] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.092 [2024-07-26 07:34:55.503220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.092 [2024-07-26 07:34:55.515184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.092 [2024-07-26 07:34:55.515208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.092 [2024-07-26 07:34:55.527187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.092 [2024-07-26 07:34:55.527211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.092 [2024-07-26 07:34:55.539218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.092 [2024-07-26 07:34:55.539271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.092 [2024-07-26 07:34:55.551200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.092 [2024-07-26 07:34:55.551226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.092 [2024-07-26 07:34:55.563194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.092 [2024-07-26 07:34:55.563217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.092 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67189) - No such process 00:09:30.092 07:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67189 00:09:30.092 07:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.092 07:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.092 07:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.092 07:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.092 07:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:30.092 07:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.092 07:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.092 delay0 00:09:30.092 07:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.092 07:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:30.092 07:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.092 07:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.092 07:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.092 07:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:30.350 [2024-07-26 07:34:55.767034] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:36.921 Initializing NVMe 
Controllers 00:09:36.921 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:36.921 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:36.921 Initialization complete. Launching workers. 00:09:36.921 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 334 00:09:36.921 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 621, failed to submit 33 00:09:36.921 success 488, unsuccess 133, failed 0 00:09:36.921 07:35:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:36.921 07:35:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:36.921 07:35:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:36.921 07:35:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:36.921 07:35:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:36.921 07:35:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:36.921 07:35:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:36.921 07:35:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:36.921 rmmod nvme_tcp 00:09:36.921 rmmod nvme_fabrics 00:09:36.921 rmmod nvme_keyring 00:09:36.921 07:35:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:36.921 07:35:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:36.921 07:35:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:36.921 07:35:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67034 ']' 00:09:36.921 07:35:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67034 00:09:36.921 07:35:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 67034 ']' 00:09:36.921 07:35:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 67034 00:09:36.921 07:35:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:36.921 07:35:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:36.921 07:35:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67034 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:36.921 killing process with pid 67034 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67034' 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 67034 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 67034 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:36.921 00:09:36.921 real 0m25.027s 00:09:36.921 user 0m40.986s 00:09:36.921 sys 0m6.994s 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.921 ************************************ 00:09:36.921 END TEST nvmf_zcopy 00:09:36.921 ************************************ 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.921 ************************************ 00:09:36.921 START TEST nvmf_nmic 00:09:36.921 ************************************ 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:36.921 * Looking for test storage... 
00:09:36.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:36.921 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.922 07:35:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:36.922 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:37.181 Cannot find device "nvmf_tgt_br" 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:37.181 Cannot find device "nvmf_tgt_br2" 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:09:37.181 Cannot find device "nvmf_tgt_br" 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:37.181 Cannot find device "nvmf_tgt_br2" 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:37.181 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:37.181 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:37.181 07:35:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:37.181 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:37.440 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:37.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:37.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:09:37.441 00:09:37.441 --- 10.0.0.2 ping statistics --- 00:09:37.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.441 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:37.441 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:37.441 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:09:37.441 00:09:37.441 --- 10.0.0.3 ping statistics --- 00:09:37.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.441 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:37.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:37.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:09:37.441 00:09:37.441 --- 10.0.0.1 ping statistics --- 00:09:37.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.441 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=67520 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 67520 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 67520 ']' 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:37.441 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:37.441 [2024-07-26 07:35:02.917973] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
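For reference, the nvmf_veth_init sequence traced above reduces to the plain iproute2/iptables sketch below. It is a simplified reproduction of what test/nvmf/common.sh does, not the helper itself: the namespace, interface, bridge, and address names are taken from the log, the second target interface (nvmf_tgt_if2 / 10.0.0.3) and all teardown/retry logic are omitted, and root privileges are assumed.

# Rebuild the initiator<->target veth topology by hand (sketch)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                  # bridge the two host-side peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                               # same reachability check as in the log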
00:09:37.441 [2024-07-26 07:35:02.918063] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.700 [2024-07-26 07:35:03.057789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:37.700 [2024-07-26 07:35:03.198422] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.700 [2024-07-26 07:35:03.198513] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.700 [2024-07-26 07:35:03.198536] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.700 [2024-07-26 07:35:03.198548] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.700 [2024-07-26 07:35:03.198564] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:37.700 [2024-07-26 07:35:03.198718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.700 [2024-07-26 07:35:03.198862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.700 [2024-07-26 07:35:03.199016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:37.700 [2024-07-26 07:35:03.199022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.700 [2024-07-26 07:35:03.278269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:38.630 07:35:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:38.630 07:35:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:38.630 07:35:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:38.630 07:35:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:38.630 07:35:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.630 07:35:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.630 07:35:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:38.630 07:35:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.630 07:35:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.630 [2024-07-26 07:35:03.960213] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.630 07:35:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.630 07:35:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:38.630 07:35:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.630 07:35:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.630 Malloc0 00:09:38.630 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.630 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:38.631 07:35:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.631 [2024-07-26 07:35:04.049958] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.631 test case1: single bdev can't be used in multiple subsystems 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.631 [2024-07-26 07:35:04.074076] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:38.631 [2024-07-26 07:35:04.074128] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:38.631 [2024-07-26 07:35:04.074141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.631 request: 00:09:38.631 { 00:09:38.631 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:38.631 "namespace": { 00:09:38.631 "bdev_name": "Malloc0", 00:09:38.631 "no_auto_visible": false 00:09:38.631 }, 00:09:38.631 "method": "nvmf_subsystem_add_ns", 00:09:38.631 "req_id": 1 00:09:38.631 } 00:09:38.631 Got JSON-RPC error response 00:09:38.631 response: 00:09:38.631 { 00:09:38.631 "code": -32602, 00:09:38.631 "message": "Invalid parameters" 00:09:38.631 } 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:38.631 Adding namespace failed - expected result. 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:38.631 test case2: host connect to nvmf target in multiple paths 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.631 [2024-07-26 07:35:04.085878] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid=437e2608-a818-4ddb-8068-388d756b599a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:38.631 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid=437e2608-a818-4ddb-8068-388d756b599a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:38.888 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:38.888 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:38.888 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:38.888 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:38.888 07:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:40.786 07:35:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:40.786 07:35:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:40.786 07:35:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.786 07:35:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:40.786 07:35:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.786 07:35:06 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:40.786 07:35:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:40.786 [global] 00:09:40.786 thread=1 00:09:40.786 invalidate=1 00:09:40.786 rw=write 00:09:40.786 time_based=1 00:09:40.786 runtime=1 00:09:40.786 ioengine=libaio 00:09:40.786 direct=1 00:09:40.786 bs=4096 00:09:40.786 iodepth=1 00:09:40.786 norandommap=0 00:09:40.786 numjobs=1 00:09:40.786 00:09:40.786 verify_dump=1 00:09:40.786 verify_backlog=512 00:09:40.786 verify_state_save=0 00:09:40.786 do_verify=1 00:09:40.786 verify=crc32c-intel 00:09:41.044 [job0] 00:09:41.044 filename=/dev/nvme0n1 00:09:41.044 Could not set queue depth (nvme0n1) 00:09:41.044 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.044 fio-3.35 00:09:41.044 Starting 1 thread 00:09:42.419 00:09:42.419 job0: (groupid=0, jobs=1): err= 0: pid=67612: Fri Jul 26 07:35:07 2024 00:09:42.419 read: IOPS=2095, BW=8384KiB/s (8585kB/s)(8392KiB/1001msec) 00:09:42.419 slat (nsec): min=13893, max=51164, avg=16213.52, stdev=3187.96 00:09:42.419 clat (usec): min=168, max=330, avg=244.55, stdev=29.22 00:09:42.419 lat (usec): min=182, max=345, avg=260.77, stdev=29.10 00:09:42.419 clat percentiles (usec): 00:09:42.419 | 1.00th=[ 178], 5.00th=[ 194], 10.00th=[ 204], 20.00th=[ 219], 00:09:42.419 | 30.00th=[ 231], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 255], 00:09:42.419 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:09:42.419 | 99.00th=[ 306], 99.50th=[ 310], 99.90th=[ 326], 99.95th=[ 330], 00:09:42.419 | 99.99th=[ 330] 00:09:42.419 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:42.419 slat (usec): min=19, max=115, avg=24.25, stdev= 6.31 00:09:42.419 clat (usec): min=94, max=284, avg=149.43, stdev=25.69 00:09:42.419 lat (usec): min=117, max=323, avg=173.67, stdev=26.02 00:09:42.419 clat percentiles (usec): 00:09:42.419 | 1.00th=[ 102], 5.00th=[ 113], 10.00th=[ 120], 20.00th=[ 131], 00:09:42.419 | 30.00th=[ 137], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 153], 00:09:42.419 | 70.00th=[ 157], 80.00th=[ 165], 90.00th=[ 180], 95.00th=[ 198], 00:09:42.419 | 99.00th=[ 237], 99.50th=[ 251], 99.90th=[ 265], 99.95th=[ 269], 00:09:42.419 | 99.99th=[ 285] 00:09:42.419 bw ( KiB/s): min=10680, max=10680, per=100.00%, avg=10680.00, stdev= 0.00, samples=1 00:09:42.419 iops : min= 2670, max= 2670, avg=2670.00, stdev= 0.00, samples=1 00:09:42.419 lat (usec) : 100=0.30%, 250=77.99%, 500=21.70% 00:09:42.419 cpu : usr=1.80%, sys=7.20%, ctx=4658, majf=0, minf=2 00:09:42.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.419 issued rwts: total=2098,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.419 00:09:42.419 Run status group 0 (all jobs): 00:09:42.419 READ: bw=8384KiB/s (8585kB/s), 8384KiB/s-8384KiB/s (8585kB/s-8585kB/s), io=8392KiB (8593kB), run=1001-1001msec 00:09:42.419 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:09:42.419 00:09:42.419 Disk stats (read/write): 00:09:42.419 nvme0n1: ios=2095/2048, merge=0/0, ticks=543/329, in_queue=872, 
util=91.68% 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:42.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:42.419 rmmod nvme_tcp 00:09:42.419 rmmod nvme_fabrics 00:09:42.419 rmmod nvme_keyring 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 67520 ']' 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 67520 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 67520 ']' 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 67520 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67520 00:09:42.419 killing process with pid 67520 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67520' 00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 67520 
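The nmic run that just completed can be reproduced outside the harness with the condensed sketch below. It is not target/nmic.sh itself: the RPC script path, NQNs, serial, and 10.0.0.2 portals are copied from the log, the --hostnqn/--hostid arguments and the fio-wrapper options are dropped for brevity, and a running nvmf_tgt (started as shown earlier) is assumed.

# Provision one malloc-backed subsystem and prove a claimed bdev cannot be shared (sketch)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# test case1: adding the already-claimed Malloc0 to a second subsystem must fail
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo 'unexpected: shared bdev was accepted' >&2
    exit 1
fi

# test case2: connect through both portals, write with verification, then disconnect
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --rw=write \
    --bs=4096 --iodepth=1 --time_based --runtime=1 --verify=crc32c-intel --do_verify=1   # device name depends on local enumeration
nvme disconnect -n nqn.2016-06.io.spdk:cnode1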
00:09:42.419 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 67520 00:09:42.678 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:42.678 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:42.678 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:42.678 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:42.678 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:42.678 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.678 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.678 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.678 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:42.938 ************************************ 00:09:42.938 END TEST nvmf_nmic 00:09:42.938 ************************************ 00:09:42.938 00:09:42.938 real 0m5.876s 00:09:42.938 user 0m19.031s 00:09:42.938 sys 0m1.969s 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:42.938 ************************************ 00:09:42.938 START TEST nvmf_fio_target 00:09:42.938 ************************************ 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:42.938 * Looking for test storage... 
00:09:42.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.938 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:42.939 
07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:42.939 Cannot find device "nvmf_tgt_br" 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:42.939 Cannot find device "nvmf_tgt_br2" 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:42.939 Cannot find device "nvmf_tgt_br" 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:42.939 Cannot find device "nvmf_tgt_br2" 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:09:42.939 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:43.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:43.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:43.198 
07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:43.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:09:43.198 00:09:43.198 --- 10.0.0.2 ping statistics --- 00:09:43.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.198 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:43.198 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:43.198 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:43.198 00:09:43.198 --- 10.0.0.3 ping statistics --- 00:09:43.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.198 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:43.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:43.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:43.198 00:09:43.198 --- 10.0.0.1 ping statistics --- 00:09:43.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.198 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:43.198 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:43.199 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.199 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:43.199 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:43.199 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:43.199 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:43.199 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:43.199 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.199 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=67788 00:09:43.199 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:43.199 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 67788 00:09:43.199 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 67788 ']' 00:09:43.199 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.199 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.199 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.199 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.199 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.457 [2024-07-26 07:35:08.835564] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
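The nvmfappstart/waitforlisten pair traced above amounts to launching nvmf_tgt inside the test namespace and polling its JSON-RPC socket. The sketch below is a simplified stand-in for those helpers, not their actual code; the binary path and the -i/-e/-m arguments come from the log, while the polling loop, its roughly 10-second budget, and the use of rpc_get_methods as the readiness probe are assumptions.

# Start the target in the test namespace and wait for its RPC socket (sketch)
NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

for _ in $(seq 1 100); do                                        # ~10 s total
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break                                                    # target is up and answering RPCs
    fi
    sleep 0.1
done
echo "nvmf_tgt running as pid $nvmfpid"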
00:09:43.457 [2024-07-26 07:35:08.835653] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.457 [2024-07-26 07:35:08.975643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.716 [2024-07-26 07:35:09.094417] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.716 [2024-07-26 07:35:09.094780] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.716 [2024-07-26 07:35:09.094981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.716 [2024-07-26 07:35:09.095122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.716 [2024-07-26 07:35:09.095159] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.716 [2024-07-26 07:35:09.095440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.716 [2024-07-26 07:35:09.095717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.716 [2024-07-26 07:35:09.095705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.716 [2024-07-26 07:35:09.095623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.716 [2024-07-26 07:35:09.172350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:44.282 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.282 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:44.282 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:44.282 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:44.282 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.282 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.282 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:44.540 [2024-07-26 07:35:10.103242] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.540 07:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.107 07:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:45.107 07:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.364 07:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:45.364 07:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.623 07:35:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:45.623 07:35:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.880 07:35:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:45.881 07:35:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:46.139 07:35:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.397 07:35:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:46.397 07:35:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.655 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:46.655 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.912 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:46.912 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:47.170 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:47.428 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:47.428 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:47.686 07:35:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:47.686 07:35:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:47.948 07:35:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.206 [2024-07-26 07:35:13.614188] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.206 07:35:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:48.464 07:35:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:48.722 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid=437e2608-a818-4ddb-8068-388d756b599a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:48.722 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:48.722 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:48.722 07:35:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:48.722 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:48.722 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:48.722 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:50.623 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:50.623 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:50.623 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:50.623 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:50.623 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:50.623 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:50.623 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:50.881 [global] 00:09:50.881 thread=1 00:09:50.881 invalidate=1 00:09:50.881 rw=write 00:09:50.881 time_based=1 00:09:50.881 runtime=1 00:09:50.881 ioengine=libaio 00:09:50.881 direct=1 00:09:50.881 bs=4096 00:09:50.881 iodepth=1 00:09:50.881 norandommap=0 00:09:50.881 numjobs=1 00:09:50.881 00:09:50.881 verify_dump=1 00:09:50.881 verify_backlog=512 00:09:50.881 verify_state_save=0 00:09:50.881 do_verify=1 00:09:50.881 verify=crc32c-intel 00:09:50.881 [job0] 00:09:50.881 filename=/dev/nvme0n1 00:09:50.881 [job1] 00:09:50.881 filename=/dev/nvme0n2 00:09:50.881 [job2] 00:09:50.881 filename=/dev/nvme0n3 00:09:50.881 [job3] 00:09:50.881 filename=/dev/nvme0n4 00:09:50.881 Could not set queue depth (nvme0n1) 00:09:50.881 Could not set queue depth (nvme0n2) 00:09:50.881 Could not set queue depth (nvme0n3) 00:09:50.881 Could not set queue depth (nvme0n4) 00:09:50.881 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:50.881 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:50.881 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:50.881 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:50.881 fio-3.35 00:09:50.881 Starting 4 threads 00:09:52.256 00:09:52.256 job0: (groupid=0, jobs=1): err= 0: pid=67975: Fri Jul 26 07:35:17 2024 00:09:52.256 read: IOPS=1402, BW=5610KiB/s (5745kB/s)(5616KiB/1001msec) 00:09:52.256 slat (nsec): min=15660, max=69376, avg=23133.58, stdev=7796.97 00:09:52.256 clat (usec): min=179, max=687, avg=363.91, stdev=86.35 00:09:52.256 lat (usec): min=194, max=718, avg=387.04, stdev=91.37 00:09:52.256 clat percentiles (usec): 00:09:52.256 | 1.00th=[ 210], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 306], 00:09:52.256 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 351], 00:09:52.257 | 70.00th=[ 363], 80.00th=[ 412], 90.00th=[ 523], 95.00th=[ 570], 00:09:52.257 | 99.00th=[ 611], 99.50th=[ 627], 99.90th=[ 676], 99.95th=[ 685], 00:09:52.257 | 99.99th=[ 685] 
00:09:52.257 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:52.257 slat (usec): min=21, max=104, avg=34.61, stdev=12.42 00:09:52.257 clat (usec): min=104, max=719, avg=257.03, stdev=107.61 00:09:52.257 lat (usec): min=128, max=756, avg=291.64, stdev=117.01 00:09:52.257 clat percentiles (usec): 00:09:52.257 | 1.00th=[ 117], 5.00th=[ 137], 10.00th=[ 147], 20.00th=[ 157], 00:09:52.257 | 30.00th=[ 174], 40.00th=[ 196], 50.00th=[ 225], 60.00th=[ 260], 00:09:52.257 | 70.00th=[ 302], 80.00th=[ 379], 90.00th=[ 412], 95.00th=[ 445], 00:09:52.257 | 99.00th=[ 510], 99.50th=[ 570], 99.90th=[ 676], 99.95th=[ 717], 00:09:52.257 | 99.99th=[ 717] 00:09:52.257 bw ( KiB/s): min= 6952, max= 6952, per=28.98%, avg=6952.00, stdev= 0.00, samples=1 00:09:52.257 iops : min= 1738, max= 1738, avg=1738.00, stdev= 0.00, samples=1 00:09:52.257 lat (usec) : 250=30.61%, 500=63.50%, 750=5.88% 00:09:52.257 cpu : usr=2.10%, sys=6.40%, ctx=2940, majf=0, minf=12 00:09:52.257 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.257 issued rwts: total=1404,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.257 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.257 job1: (groupid=0, jobs=1): err= 0: pid=67976: Fri Jul 26 07:35:17 2024 00:09:52.257 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:52.257 slat (nsec): min=18915, max=78137, avg=32880.63, stdev=11287.84 00:09:52.257 clat (usec): min=225, max=1134, avg=451.50, stdev=104.76 00:09:52.257 lat (usec): min=254, max=1160, avg=484.38, stdev=111.54 00:09:52.257 clat percentiles (usec): 00:09:52.257 | 1.00th=[ 297], 5.00th=[ 347], 10.00th=[ 363], 20.00th=[ 379], 00:09:52.257 | 30.00th=[ 392], 40.00th=[ 408], 50.00th=[ 420], 60.00th=[ 433], 00:09:52.257 | 70.00th=[ 457], 80.00th=[ 515], 90.00th=[ 619], 95.00th=[ 693], 00:09:52.257 | 99.00th=[ 766], 99.50th=[ 775], 99.90th=[ 799], 99.95th=[ 1139], 00:09:52.257 | 99.99th=[ 1139] 00:09:52.257 write: IOPS=1348, BW=5395KiB/s (5524kB/s)(5400KiB/1001msec); 0 zone resets 00:09:52.257 slat (usec): min=25, max=103, avg=41.76, stdev=12.33 00:09:52.257 clat (usec): min=126, max=3916, avg=324.75, stdev=160.88 00:09:52.257 lat (usec): min=159, max=3964, avg=366.51, stdev=167.67 00:09:52.257 clat percentiles (usec): 00:09:52.257 | 1.00th=[ 151], 5.00th=[ 165], 10.00th=[ 176], 20.00th=[ 194], 00:09:52.257 | 30.00th=[ 223], 40.00th=[ 265], 50.00th=[ 293], 60.00th=[ 334], 00:09:52.257 | 70.00th=[ 388], 80.00th=[ 474], 90.00th=[ 515], 95.00th=[ 537], 00:09:52.257 | 99.00th=[ 586], 99.50th=[ 594], 99.90th=[ 971], 99.95th=[ 3916], 00:09:52.257 | 99.99th=[ 3916] 00:09:52.257 bw ( KiB/s): min= 4832, max= 4832, per=20.14%, avg=4832.00, stdev= 0.00, samples=1 00:09:52.257 iops : min= 1208, max= 1208, avg=1208.00, stdev= 0.00, samples=1 00:09:52.257 lat (usec) : 250=20.64%, 500=62.76%, 750=15.75%, 1000=0.76% 00:09:52.257 lat (msec) : 2=0.04%, 4=0.04% 00:09:52.257 cpu : usr=2.30%, sys=6.80%, ctx=2375, majf=0, minf=11 00:09:52.257 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.257 issued rwts: total=1024,1350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.257 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:09:52.257 job2: (groupid=0, jobs=1): err= 0: pid=67977: Fri Jul 26 07:35:17 2024 00:09:52.257 read: IOPS=1095, BW=4383KiB/s (4488kB/s)(4392KiB/1002msec) 00:09:52.257 slat (nsec): min=17711, max=75862, avg=28732.95, stdev=7690.41 00:09:52.257 clat (usec): min=230, max=1419, avg=455.12, stdev=110.11 00:09:52.257 lat (usec): min=256, max=1460, avg=483.85, stdev=114.36 00:09:52.257 clat percentiles (usec): 00:09:52.257 | 1.00th=[ 293], 5.00th=[ 351], 10.00th=[ 363], 20.00th=[ 379], 00:09:52.257 | 30.00th=[ 396], 40.00th=[ 408], 50.00th=[ 420], 60.00th=[ 437], 00:09:52.257 | 70.00th=[ 453], 80.00th=[ 494], 90.00th=[ 652], 95.00th=[ 685], 00:09:52.257 | 99.00th=[ 750], 99.50th=[ 766], 99.90th=[ 799], 99.95th=[ 1418], 00:09:52.257 | 99.99th=[ 1418] 00:09:52.257 write: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec); 0 zone resets 00:09:52.257 slat (usec): min=24, max=121, avg=36.75, stdev= 7.76 00:09:52.257 clat (usec): min=139, max=575, avg=263.59, stdev=62.53 00:09:52.257 lat (usec): min=169, max=661, avg=300.34, stdev=64.34 00:09:52.257 clat percentiles (usec): 00:09:52.257 | 1.00th=[ 165], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 202], 00:09:52.257 | 30.00th=[ 221], 40.00th=[ 243], 50.00th=[ 265], 60.00th=[ 277], 00:09:52.257 | 70.00th=[ 289], 80.00th=[ 314], 90.00th=[ 355], 95.00th=[ 379], 00:09:52.257 | 99.00th=[ 429], 99.50th=[ 437], 99.90th=[ 498], 99.95th=[ 578], 00:09:52.257 | 99.99th=[ 578] 00:09:52.257 bw ( KiB/s): min= 5600, max= 6688, per=25.61%, avg=6144.00, stdev=769.33, samples=2 00:09:52.257 iops : min= 1400, max= 1672, avg=1536.00, stdev=192.33, samples=2 00:09:52.257 lat (usec) : 250=25.44%, 500=66.29%, 750=7.90%, 1000=0.34% 00:09:52.257 lat (msec) : 2=0.04% 00:09:52.257 cpu : usr=2.30%, sys=6.49%, ctx=2650, majf=0, minf=5 00:09:52.257 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.257 issued rwts: total=1098,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.257 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.257 job3: (groupid=0, jobs=1): err= 0: pid=67978: Fri Jul 26 07:35:17 2024 00:09:52.257 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:52.257 slat (nsec): min=16268, max=55469, avg=23913.07, stdev=5906.54 00:09:52.257 clat (usec): min=247, max=3763, avg=381.36, stdev=134.11 00:09:52.257 lat (usec): min=266, max=3793, avg=405.28, stdev=137.00 00:09:52.257 clat percentiles (usec): 00:09:52.257 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 306], 00:09:52.257 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 351], 00:09:52.257 | 70.00th=[ 375], 80.00th=[ 461], 90.00th=[ 570], 95.00th=[ 594], 00:09:52.257 | 99.00th=[ 635], 99.50th=[ 660], 99.90th=[ 1074], 99.95th=[ 3752], 00:09:52.257 | 99.99th=[ 3752] 00:09:52.257 write: IOPS=1585, BW=6342KiB/s (6494kB/s)(6348KiB/1001msec); 0 zone resets 00:09:52.257 slat (nsec): min=19866, max=84042, avg=26740.25, stdev=4456.42 00:09:52.257 clat (usec): min=104, max=659, avg=205.93, stdev=54.74 00:09:52.257 lat (usec): min=129, max=706, avg=232.67, stdev=55.73 00:09:52.257 clat percentiles (usec): 00:09:52.257 | 1.00th=[ 119], 5.00th=[ 139], 10.00th=[ 149], 20.00th=[ 159], 00:09:52.257 | 30.00th=[ 169], 40.00th=[ 184], 50.00th=[ 200], 60.00th=[ 219], 00:09:52.257 | 70.00th=[ 233], 80.00th=[ 247], 90.00th=[ 269], 95.00th=[ 285], 00:09:52.257 | 
99.00th=[ 371], 99.50th=[ 404], 99.90th=[ 627], 99.95th=[ 660], 00:09:52.257 | 99.99th=[ 660] 00:09:52.257 bw ( KiB/s): min= 8192, max= 8192, per=34.15%, avg=8192.00, stdev= 0.00, samples=1 00:09:52.257 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:52.257 lat (usec) : 250=41.27%, 500=49.86%, 750=8.81% 00:09:52.257 lat (msec) : 2=0.03%, 4=0.03% 00:09:52.257 cpu : usr=1.70%, sys=6.30%, ctx=3123, majf=0, minf=7 00:09:52.257 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.257 issued rwts: total=1536,1587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.257 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.257 00:09:52.257 Run status group 0 (all jobs): 00:09:52.257 READ: bw=19.7MiB/s (20.7MB/s), 4092KiB/s-6138KiB/s (4190kB/s-6285kB/s), io=19.8MiB (20.7MB), run=1001-1002msec 00:09:52.257 WRITE: bw=23.4MiB/s (24.6MB/s), 5395KiB/s-6342KiB/s (5524kB/s-6494kB/s), io=23.5MiB (24.6MB), run=1001-1002msec 00:09:52.257 00:09:52.257 Disk stats (read/write): 00:09:52.257 nvme0n1: ios=1074/1500, merge=0/0, ticks=423/398, in_queue=821, util=88.08% 00:09:52.257 nvme0n2: ios=1014/1024, merge=0/0, ticks=468/361, in_queue=829, util=87.72% 00:09:52.257 nvme0n3: ios=1024/1184, merge=0/0, ticks=467/333, in_queue=800, util=88.98% 00:09:52.257 nvme0n4: ios=1167/1536, merge=0/0, ticks=470/331, in_queue=801, util=89.32% 00:09:52.257 07:35:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:52.257 [global] 00:09:52.257 thread=1 00:09:52.257 invalidate=1 00:09:52.257 rw=randwrite 00:09:52.257 time_based=1 00:09:52.257 runtime=1 00:09:52.257 ioengine=libaio 00:09:52.257 direct=1 00:09:52.257 bs=4096 00:09:52.257 iodepth=1 00:09:52.257 norandommap=0 00:09:52.257 numjobs=1 00:09:52.257 00:09:52.257 verify_dump=1 00:09:52.257 verify_backlog=512 00:09:52.257 verify_state_save=0 00:09:52.257 do_verify=1 00:09:52.257 verify=crc32c-intel 00:09:52.257 [job0] 00:09:52.257 filename=/dev/nvme0n1 00:09:52.257 [job1] 00:09:52.257 filename=/dev/nvme0n2 00:09:52.257 [job2] 00:09:52.257 filename=/dev/nvme0n3 00:09:52.257 [job3] 00:09:52.257 filename=/dev/nvme0n4 00:09:52.257 Could not set queue depth (nvme0n1) 00:09:52.257 Could not set queue depth (nvme0n2) 00:09:52.257 Could not set queue depth (nvme0n3) 00:09:52.257 Could not set queue depth (nvme0n4) 00:09:52.257 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.257 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.257 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.257 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.257 fio-3.35 00:09:52.257 Starting 4 threads 00:09:53.632 00:09:53.632 job0: (groupid=0, jobs=1): err= 0: pid=68037: Fri Jul 26 07:35:18 2024 00:09:53.632 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:53.632 slat (nsec): min=17178, max=64909, avg=29428.13, stdev=7223.00 00:09:53.632 clat (usec): min=242, max=2168, avg=481.31, stdev=123.65 00:09:53.632 lat (usec): min=271, max=2187, avg=510.74, stdev=126.43 00:09:53.632 clat 
percentiles (usec): 00:09:53.632 | 1.00th=[ 277], 5.00th=[ 363], 10.00th=[ 379], 20.00th=[ 396], 00:09:53.632 | 30.00th=[ 412], 40.00th=[ 424], 50.00th=[ 441], 60.00th=[ 457], 00:09:53.632 | 70.00th=[ 523], 80.00th=[ 586], 90.00th=[ 635], 95.00th=[ 701], 00:09:53.632 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 1188], 99.95th=[ 2180], 00:09:53.632 | 99.99th=[ 2180] 00:09:53.632 write: IOPS=1151, BW=4607KiB/s (4718kB/s)(4612KiB/1001msec); 0 zone resets 00:09:53.632 slat (usec): min=24, max=217, avg=44.84, stdev=12.45 00:09:53.632 clat (usec): min=140, max=658, avg=360.65, stdev=129.39 00:09:53.632 lat (usec): min=179, max=712, avg=405.49, stdev=136.80 00:09:53.632 clat percentiles (usec): 00:09:53.632 | 1.00th=[ 159], 5.00th=[ 172], 10.00th=[ 182], 20.00th=[ 215], 00:09:53.632 | 30.00th=[ 277], 40.00th=[ 326], 50.00th=[ 367], 60.00th=[ 388], 00:09:53.632 | 70.00th=[ 433], 80.00th=[ 506], 90.00th=[ 545], 95.00th=[ 562], 00:09:53.632 | 99.00th=[ 611], 99.50th=[ 619], 99.90th=[ 644], 99.95th=[ 660], 00:09:53.632 | 99.99th=[ 660] 00:09:53.632 bw ( KiB/s): min= 4592, max= 4592, per=14.72%, avg=4592.00, stdev= 0.00, samples=1 00:09:53.632 iops : min= 1148, max= 1148, avg=1148.00, stdev= 0.00, samples=1 00:09:53.632 lat (usec) : 250=13.37%, 500=60.77%, 750=24.58%, 1000=1.19% 00:09:53.632 lat (msec) : 2=0.05%, 4=0.05% 00:09:53.632 cpu : usr=2.10%, sys=6.60%, ctx=2178, majf=0, minf=20 00:09:53.632 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.632 issued rwts: total=1024,1153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.632 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.632 job1: (groupid=0, jobs=1): err= 0: pid=68038: Fri Jul 26 07:35:18 2024 00:09:53.632 read: IOPS=2432, BW=9730KiB/s (9964kB/s)(9740KiB/1001msec) 00:09:53.632 slat (nsec): min=11377, max=34682, avg=14656.00, stdev=2280.04 00:09:53.632 clat (usec): min=144, max=424, avg=206.37, stdev=27.77 00:09:53.632 lat (usec): min=157, max=437, avg=221.03, stdev=27.92 00:09:53.632 clat percentiles (usec): 00:09:53.632 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 184], 00:09:53.632 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 212], 00:09:53.632 | 70.00th=[ 221], 80.00th=[ 229], 90.00th=[ 243], 95.00th=[ 253], 00:09:53.633 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 371], 99.95th=[ 388], 00:09:53.633 | 99.99th=[ 424] 00:09:53.633 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:53.633 slat (usec): min=14, max=109, avg=22.70, stdev= 5.59 00:09:53.633 clat (usec): min=97, max=765, avg=154.20, stdev=27.59 00:09:53.633 lat (usec): min=115, max=799, avg=176.90, stdev=28.84 00:09:53.633 clat percentiles (usec): 00:09:53.633 | 1.00th=[ 109], 5.00th=[ 119], 10.00th=[ 125], 20.00th=[ 133], 00:09:53.633 | 30.00th=[ 139], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 159], 00:09:53.633 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 196], 00:09:53.633 | 99.00th=[ 212], 99.50th=[ 229], 99.90th=[ 330], 99.95th=[ 343], 00:09:53.633 | 99.99th=[ 766] 00:09:53.633 bw ( KiB/s): min=11992, max=11992, per=38.43%, avg=11992.00, stdev= 0.00, samples=1 00:09:53.633 iops : min= 2998, max= 2998, avg=2998.00, stdev= 0.00, samples=1 00:09:53.633 lat (usec) : 100=0.04%, 250=96.58%, 500=3.36%, 1000=0.02% 00:09:53.633 cpu : usr=2.60%, sys=6.80%, ctx=4995, majf=0, minf=7 00:09:53.633 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.633 issued rwts: total=2435,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.633 job2: (groupid=0, jobs=1): err= 0: pid=68039: Fri Jul 26 07:35:18 2024 00:09:53.633 read: IOPS=2374, BW=9499KiB/s (9726kB/s)(9508KiB/1001msec) 00:09:53.633 slat (nsec): min=11196, max=33041, avg=13074.33, stdev=2083.53 00:09:53.633 clat (usec): min=146, max=595, avg=211.76, stdev=28.05 00:09:53.633 lat (usec): min=158, max=612, avg=224.84, stdev=28.14 00:09:53.633 clat percentiles (usec): 00:09:53.633 | 1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 188], 00:09:53.633 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 210], 60.00th=[ 219], 00:09:53.633 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 260], 00:09:53.633 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 310], 99.95th=[ 474], 00:09:53.633 | 99.99th=[ 594] 00:09:53.633 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:53.633 slat (usec): min=13, max=134, avg=20.18, stdev= 5.28 00:09:53.633 clat (usec): min=96, max=1237, avg=158.60, stdev=32.76 00:09:53.633 lat (usec): min=123, max=1259, avg=178.79, stdev=33.34 00:09:53.633 clat percentiles (usec): 00:09:53.633 | 1.00th=[ 113], 5.00th=[ 122], 10.00th=[ 128], 20.00th=[ 137], 00:09:53.633 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 163], 00:09:53.633 | 70.00th=[ 169], 80.00th=[ 180], 90.00th=[ 192], 95.00th=[ 202], 00:09:53.633 | 99.00th=[ 217], 99.50th=[ 233], 99.90th=[ 318], 99.95th=[ 416], 00:09:53.633 | 99.99th=[ 1237] 00:09:53.633 bw ( KiB/s): min=12008, max=12008, per=38.48%, avg=12008.00, stdev= 0.00, samples=1 00:09:53.633 iops : min= 3002, max= 3002, avg=3002.00, stdev= 0.00, samples=1 00:09:53.633 lat (usec) : 100=0.02%, 250=95.91%, 500=4.03%, 750=0.02% 00:09:53.633 lat (msec) : 2=0.02% 00:09:53.633 cpu : usr=2.10%, sys=6.30%, ctx=4938, majf=0, minf=13 00:09:53.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.633 issued rwts: total=2377,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.633 job3: (groupid=0, jobs=1): err= 0: pid=68040: Fri Jul 26 07:35:18 2024 00:09:53.633 read: IOPS=1092, BW=4372KiB/s (4477kB/s)(4376KiB/1001msec) 00:09:53.633 slat (nsec): min=17251, max=65927, avg=28537.89, stdev=6543.82 00:09:53.633 clat (usec): min=257, max=1002, avg=441.98, stdev=96.69 00:09:53.633 lat (usec): min=280, max=1042, avg=470.52, stdev=99.24 00:09:53.633 clat percentiles (usec): 00:09:53.633 | 1.00th=[ 277], 5.00th=[ 351], 10.00th=[ 367], 20.00th=[ 383], 00:09:53.633 | 30.00th=[ 396], 40.00th=[ 408], 50.00th=[ 416], 60.00th=[ 429], 00:09:53.633 | 70.00th=[ 445], 80.00th=[ 469], 90.00th=[ 611], 95.00th=[ 676], 00:09:53.633 | 99.00th=[ 766], 99.50th=[ 799], 99.90th=[ 922], 99.95th=[ 1004], 00:09:53.633 | 99.99th=[ 1004] 00:09:53.633 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:53.633 slat (nsec): min=25455, max=89427, avg=37649.34, stdev=7342.54 00:09:53.633 clat (usec): min=159, max=544, avg=273.13, stdev=62.48 
00:09:53.633 lat (usec): min=194, max=630, avg=310.78, stdev=63.52 00:09:53.633 clat percentiles (usec): 00:09:53.633 | 1.00th=[ 176], 5.00th=[ 190], 10.00th=[ 198], 20.00th=[ 212], 00:09:53.633 | 30.00th=[ 231], 40.00th=[ 247], 50.00th=[ 269], 60.00th=[ 285], 00:09:53.633 | 70.00th=[ 297], 80.00th=[ 326], 90.00th=[ 367], 95.00th=[ 388], 00:09:53.633 | 99.00th=[ 429], 99.50th=[ 441], 99.90th=[ 482], 99.95th=[ 545], 00:09:53.633 | 99.99th=[ 545] 00:09:53.633 bw ( KiB/s): min= 6392, max= 6392, per=20.48%, avg=6392.00, stdev= 0.00, samples=1 00:09:53.633 iops : min= 1598, max= 1598, avg=1598.00, stdev= 0.00, samples=1 00:09:53.633 lat (usec) : 250=24.33%, 500=69.81%, 750=5.32%, 1000=0.49% 00:09:53.633 lat (msec) : 2=0.04% 00:09:53.633 cpu : usr=2.30%, sys=6.60%, ctx=2644, majf=0, minf=7 00:09:53.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.633 issued rwts: total=1094,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.633 00:09:53.633 Run status group 0 (all jobs): 00:09:53.633 READ: bw=27.0MiB/s (28.4MB/s), 4092KiB/s-9730KiB/s (4190kB/s-9964kB/s), io=27.1MiB (28.4MB), run=1001-1001msec 00:09:53.633 WRITE: bw=30.5MiB/s (32.0MB/s), 4607KiB/s-9.99MiB/s (4718kB/s-10.5MB/s), io=30.5MiB (32.0MB), run=1001-1001msec 00:09:53.633 00:09:53.633 Disk stats (read/write): 00:09:53.633 nvme0n1: ios=932/1024, merge=0/0, ticks=456/396, in_queue=852, util=89.58% 00:09:53.633 nvme0n2: ios=2095/2336, merge=0/0, ticks=432/378, in_queue=810, util=88.98% 00:09:53.633 nvme0n3: ios=2048/2271, merge=0/0, ticks=432/374, in_queue=806, util=89.30% 00:09:53.633 nvme0n4: ios=1024/1209, merge=0/0, ticks=461/345, in_queue=806, util=89.86% 00:09:53.633 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:53.633 [global] 00:09:53.633 thread=1 00:09:53.633 invalidate=1 00:09:53.633 rw=write 00:09:53.633 time_based=1 00:09:53.633 runtime=1 00:09:53.633 ioengine=libaio 00:09:53.633 direct=1 00:09:53.633 bs=4096 00:09:53.633 iodepth=128 00:09:53.633 norandommap=0 00:09:53.633 numjobs=1 00:09:53.633 00:09:53.633 verify_dump=1 00:09:53.633 verify_backlog=512 00:09:53.633 verify_state_save=0 00:09:53.633 do_verify=1 00:09:53.633 verify=crc32c-intel 00:09:53.633 [job0] 00:09:53.633 filename=/dev/nvme0n1 00:09:53.633 [job1] 00:09:53.633 filename=/dev/nvme0n2 00:09:53.633 [job2] 00:09:53.633 filename=/dev/nvme0n3 00:09:53.633 [job3] 00:09:53.633 filename=/dev/nvme0n4 00:09:53.633 Could not set queue depth (nvme0n1) 00:09:53.633 Could not set queue depth (nvme0n2) 00:09:53.633 Could not set queue depth (nvme0n3) 00:09:53.633 Could not set queue depth (nvme0n4) 00:09:53.633 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:53.633 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:53.633 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:53.633 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:53.633 fio-3.35 00:09:53.633 Starting 4 threads 00:09:55.009 00:09:55.009 job0: (groupid=0, jobs=1): err= 0: 
pid=68099: Fri Jul 26 07:35:20 2024 00:09:55.009 read: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec) 00:09:55.009 slat (usec): min=6, max=13790, avg=314.62, stdev=1664.15 00:09:55.009 clat (usec): min=26442, max=44626, avg=40342.52, stdev=2331.97 00:09:55.009 lat (usec): min=35371, max=44643, avg=40657.14, stdev=1675.94 00:09:55.010 clat percentiles (usec): 00:09:55.010 | 1.00th=[31065], 5.00th=[35390], 10.00th=[39060], 20.00th=[39584], 00:09:55.010 | 30.00th=[40109], 40.00th=[40633], 50.00th=[40633], 60.00th=[41157], 00:09:55.010 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42730], 00:09:55.010 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:09:55.010 | 99.99th=[44827] 00:09:55.010 write: IOPS=1656, BW=6627KiB/s (6786kB/s)(6660KiB/1005msec); 0 zone resets 00:09:55.010 slat (usec): min=15, max=12973, avg=303.92, stdev=1556.12 00:09:55.010 clat (usec): min=494, max=44405, avg=38542.79, stdev=5962.12 00:09:55.010 lat (usec): min=7884, max=44447, avg=38846.71, stdev=5759.01 00:09:55.010 clat percentiles (usec): 00:09:55.010 | 1.00th=[ 8356], 5.00th=[28443], 10.00th=[34341], 20.00th=[38536], 00:09:55.010 | 30.00th=[39060], 40.00th=[39584], 50.00th=[39584], 60.00th=[40109], 00:09:55.010 | 70.00th=[40109], 80.00th=[41157], 90.00th=[43254], 95.00th=[43779], 00:09:55.010 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:09:55.010 | 99.99th=[44303] 00:09:55.010 bw ( KiB/s): min= 4608, max= 7703, per=12.88%, avg=6155.50, stdev=2188.50, samples=2 00:09:55.010 iops : min= 1152, max= 1925, avg=1538.50, stdev=546.59, samples=2 00:09:55.010 lat (usec) : 500=0.03% 00:09:55.010 lat (msec) : 10=1.00%, 20=1.00%, 50=97.97% 00:09:55.010 cpu : usr=1.99%, sys=4.78%, ctx=113, majf=0, minf=9 00:09:55.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:09:55.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.010 issued rwts: total=1536,1665,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.010 job1: (groupid=0, jobs=1): err= 0: pid=68100: Fri Jul 26 07:35:20 2024 00:09:55.010 read: IOPS=4371, BW=17.1MiB/s (17.9MB/s)(17.1MiB/1003msec) 00:09:55.010 slat (usec): min=7, max=3881, avg=107.53, stdev=514.12 00:09:55.010 clat (usec): min=364, max=16192, avg=14138.69, stdev=1393.71 00:09:55.010 lat (usec): min=3655, max=16216, avg=14246.22, stdev=1299.49 00:09:55.010 clat percentiles (usec): 00:09:55.010 | 1.00th=[ 7046], 5.00th=[12125], 10.00th=[13304], 20.00th=[13698], 00:09:55.010 | 30.00th=[13829], 40.00th=[14222], 50.00th=[14353], 60.00th=[14484], 00:09:55.010 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15139], 95.00th=[15664], 00:09:55.010 | 99.00th=[16057], 99.50th=[16057], 99.90th=[16188], 99.95th=[16188], 00:09:55.010 | 99.99th=[16188] 00:09:55.010 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:09:55.010 slat (usec): min=12, max=3690, avg=106.73, stdev=462.39 00:09:55.010 clat (usec): min=9817, max=16558, avg=13996.42, stdev=886.11 00:09:55.010 lat (usec): min=12236, max=16580, avg=14103.14, stdev=761.15 00:09:55.010 clat percentiles (usec): 00:09:55.010 | 1.00th=[11076], 5.00th=[12780], 10.00th=[13173], 20.00th=[13435], 00:09:55.010 | 30.00th=[13566], 40.00th=[13829], 50.00th=[13960], 60.00th=[14091], 00:09:55.010 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15139], 95.00th=[15533], 00:09:55.010 | 
99.00th=[16319], 99.50th=[16450], 99.90th=[16581], 99.95th=[16581], 00:09:55.010 | 99.99th=[16581] 00:09:55.010 bw ( KiB/s): min=17963, max=18936, per=38.60%, avg=18449.50, stdev=688.01, samples=2 00:09:55.010 iops : min= 4490, max= 4734, avg=4612.00, stdev=172.53, samples=2 00:09:55.010 lat (usec) : 500=0.01% 00:09:55.010 lat (msec) : 4=0.21%, 10=0.51%, 20=99.27% 00:09:55.010 cpu : usr=4.69%, sys=12.97%, ctx=283, majf=0, minf=9 00:09:55.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:55.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.010 issued rwts: total=4385,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.010 job2: (groupid=0, jobs=1): err= 0: pid=68101: Fri Jul 26 07:35:20 2024 00:09:55.010 read: IOPS=3817, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1005msec) 00:09:55.010 slat (usec): min=5, max=7515, avg=127.69, stdev=599.99 00:09:55.010 clat (usec): min=1168, max=23567, avg=16245.22, stdev=2032.64 00:09:55.010 lat (usec): min=5614, max=24230, avg=16372.91, stdev=2049.04 00:09:55.010 clat percentiles (usec): 00:09:55.010 | 1.00th=[ 6390], 5.00th=[13566], 10.00th=[14484], 20.00th=[15270], 00:09:55.010 | 30.00th=[15795], 40.00th=[16188], 50.00th=[16319], 60.00th=[16581], 00:09:55.010 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17957], 95.00th=[19268], 00:09:55.010 | 99.00th=[21103], 99.50th=[22676], 99.90th=[23462], 99.95th=[23462], 00:09:55.010 | 99.99th=[23462] 00:09:55.010 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:09:55.010 slat (usec): min=13, max=7527, avg=117.06, stdev=685.52 00:09:55.010 clat (usec): min=7572, max=26146, avg=15794.05, stdev=1891.11 00:09:55.010 lat (usec): min=7593, max=26192, avg=15911.12, stdev=1995.78 00:09:55.010 clat percentiles (usec): 00:09:55.010 | 1.00th=[11076], 5.00th=[13042], 10.00th=[13960], 20.00th=[14484], 00:09:55.010 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15664], 60.00th=[16057], 00:09:55.010 | 70.00th=[16450], 80.00th=[17171], 90.00th=[17695], 95.00th=[19006], 00:09:55.010 | 99.00th=[21890], 99.50th=[22676], 99.90th=[23725], 99.95th=[23725], 00:09:55.010 | 99.99th=[26084] 00:09:55.010 bw ( KiB/s): min=16384, max=16384, per=34.28%, avg=16384.00, stdev= 0.00, samples=2 00:09:55.010 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:55.010 lat (msec) : 2=0.01%, 10=1.16%, 20=95.55%, 50=3.28% 00:09:55.010 cpu : usr=3.19%, sys=12.35%, ctx=293, majf=0, minf=7 00:09:55.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:55.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.010 issued rwts: total=3837,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.010 job3: (groupid=0, jobs=1): err= 0: pid=68102: Fri Jul 26 07:35:20 2024 00:09:55.010 read: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec) 00:09:55.010 slat (usec): min=12, max=10564, avg=311.90, stdev=1642.56 00:09:55.010 clat (usec): min=29770, max=43371, avg=40580.10, stdev=1932.68 00:09:55.010 lat (usec): min=38524, max=43388, avg=40892.00, stdev=1027.88 00:09:55.010 clat percentiles (usec): 00:09:55.010 | 1.00th=[31065], 5.00th=[39060], 10.00th=[39060], 20.00th=[39584], 00:09:55.010 | 
30.00th=[40109], 40.00th=[40633], 50.00th=[40633], 60.00th=[41157], 00:09:55.010 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42730], 00:09:55.010 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:55.010 | 99.99th=[43254] 00:09:55.010 write: IOPS=1653, BW=6614KiB/s (6772kB/s)(6660KiB/1007msec); 0 zone resets 00:09:55.010 slat (usec): min=15, max=13424, avg=306.24, stdev=1561.44 00:09:55.010 clat (usec): min=2184, max=45364, avg=37866.13, stdev=5470.24 00:09:55.010 lat (usec): min=9486, max=45393, avg=38172.38, stdev=5253.75 00:09:55.010 clat percentiles (usec): 00:09:55.010 | 1.00th=[ 9896], 5.00th=[30016], 10.00th=[33817], 20.00th=[38011], 00:09:55.010 | 30.00th=[39060], 40.00th=[39060], 50.00th=[39584], 60.00th=[39584], 00:09:55.010 | 70.00th=[40109], 80.00th=[40109], 90.00th=[40633], 95.00th=[41157], 00:09:55.010 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:09:55.010 | 99.99th=[45351] 00:09:55.010 bw ( KiB/s): min= 4608, max= 7703, per=12.88%, avg=6155.50, stdev=2188.50, samples=2 00:09:55.010 iops : min= 1152, max= 1925, avg=1538.50, stdev=546.59, samples=2 00:09:55.010 lat (msec) : 4=0.03%, 10=0.53%, 20=1.12%, 50=98.31% 00:09:55.010 cpu : usr=1.79%, sys=5.17%, ctx=103, majf=0, minf=14 00:09:55.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:09:55.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.010 issued rwts: total=1536,1665,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.010 00:09:55.010 Run status group 0 (all jobs): 00:09:55.010 READ: bw=43.8MiB/s (45.9MB/s), 6101KiB/s-17.1MiB/s (6248kB/s-17.9MB/s), io=44.1MiB (46.3MB), run=1003-1007msec 00:09:55.010 WRITE: bw=46.7MiB/s (48.9MB/s), 6614KiB/s-17.9MiB/s (6772kB/s-18.8MB/s), io=47.0MiB (49.3MB), run=1003-1007msec 00:09:55.010 00:09:55.010 Disk stats (read/write): 00:09:55.010 nvme0n1: ios=1266/1536, merge=0/0, ticks=12191/14119, in_queue=26310, util=89.98% 00:09:55.010 nvme0n2: ios=3761/4096, merge=0/0, ticks=11813/12381, in_queue=24194, util=89.91% 00:09:55.010 nvme0n3: ios=3282/3584, merge=0/0, ticks=25960/24472, in_queue=50432, util=90.07% 00:09:55.010 nvme0n4: ios=1222/1536, merge=0/0, ticks=11921/14161, in_queue=26082, util=89.83% 00:09:55.010 07:35:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:55.010 [global] 00:09:55.010 thread=1 00:09:55.010 invalidate=1 00:09:55.010 rw=randwrite 00:09:55.010 time_based=1 00:09:55.010 runtime=1 00:09:55.010 ioengine=libaio 00:09:55.010 direct=1 00:09:55.010 bs=4096 00:09:55.010 iodepth=128 00:09:55.010 norandommap=0 00:09:55.010 numjobs=1 00:09:55.010 00:09:55.010 verify_dump=1 00:09:55.010 verify_backlog=512 00:09:55.010 verify_state_save=0 00:09:55.010 do_verify=1 00:09:55.010 verify=crc32c-intel 00:09:55.010 [job0] 00:09:55.010 filename=/dev/nvme0n1 00:09:55.010 [job1] 00:09:55.010 filename=/dev/nvme0n2 00:09:55.010 [job2] 00:09:55.010 filename=/dev/nvme0n3 00:09:55.010 [job3] 00:09:55.010 filename=/dev/nvme0n4 00:09:55.010 Could not set queue depth (nvme0n1) 00:09:55.010 Could not set queue depth (nvme0n2) 00:09:55.010 Could not set queue depth (nvme0n3) 00:09:55.010 Could not set queue depth (nvme0n4) 00:09:55.010 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.010 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.010 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.011 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.011 fio-3.35 00:09:55.011 Starting 4 threads 00:09:56.410 00:09:56.410 job0: (groupid=0, jobs=1): err= 0: pid=68155: Fri Jul 26 07:35:21 2024 00:09:56.410 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:09:56.410 slat (usec): min=7, max=24227, avg=126.52, stdev=930.04 00:09:56.410 clat (usec): min=10175, max=51678, avg=17788.27, stdev=5587.02 00:09:56.410 lat (usec): min=10192, max=51712, avg=17914.79, stdev=5653.00 00:09:56.410 clat percentiles (usec): 00:09:56.410 | 1.00th=[10421], 5.00th=[13435], 10.00th=[13698], 20.00th=[13829], 00:09:56.410 | 30.00th=[14091], 40.00th=[14484], 50.00th=[14877], 60.00th=[19530], 00:09:56.410 | 70.00th=[20055], 80.00th=[20841], 90.00th=[22152], 95.00th=[29230], 00:09:56.410 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40633], 99.95th=[40633], 00:09:56.410 | 99.99th=[51643] 00:09:56.410 write: IOPS=4433, BW=17.3MiB/s (18.2MB/s)(17.4MiB/1004msec); 0 zone resets 00:09:56.410 slat (usec): min=11, max=13196, avg=100.56, stdev=624.13 00:09:56.410 clat (usec): min=3811, max=33976, avg=12220.50, stdev=3410.99 00:09:56.410 lat (usec): min=3832, max=34023, avg=12321.06, stdev=3388.05 00:09:56.410 clat percentiles (usec): 00:09:56.410 | 1.00th=[ 6390], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10159], 00:09:56.410 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11338], 00:09:56.410 | 70.00th=[11994], 80.00th=[14091], 90.00th=[17433], 95.00th=[20055], 00:09:56.410 | 99.00th=[25297], 99.50th=[25297], 99.90th=[25560], 99.95th=[25560], 00:09:56.410 | 99.99th=[33817] 00:09:56.410 bw ( KiB/s): min=16351, max=18208, per=33.65%, avg=17279.50, stdev=1313.10, samples=2 00:09:56.411 iops : min= 4087, max= 4552, avg=4319.50, stdev=328.80, samples=2 00:09:56.411 lat (msec) : 4=0.09%, 10=7.75%, 20=74.10%, 50=18.05%, 100=0.01% 00:09:56.411 cpu : usr=3.69%, sys=11.67%, ctx=175, majf=0, minf=7 00:09:56.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:56.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.411 issued rwts: total=4096,4451,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.411 job1: (groupid=0, jobs=1): err= 0: pid=68156: Fri Jul 26 07:35:21 2024 00:09:56.411 read: IOPS=1013, BW=4055KiB/s (4153kB/s)(4096KiB/1010msec) 00:09:56.411 slat (usec): min=9, max=18207, avg=398.58, stdev=1629.83 00:09:56.411 clat (usec): min=33958, max=73726, avg=50284.84, stdev=10197.53 00:09:56.411 lat (usec): min=33985, max=73765, avg=50683.42, stdev=10283.55 00:09:56.411 clat percentiles (usec): 00:09:56.411 | 1.00th=[34866], 5.00th=[38011], 10.00th=[39584], 20.00th=[39584], 00:09:56.411 | 30.00th=[41157], 40.00th=[44827], 50.00th=[49546], 60.00th=[53740], 00:09:56.411 | 70.00th=[56361], 80.00th=[60031], 90.00th=[67634], 95.00th=[67634], 00:09:56.411 | 99.00th=[68682], 99.50th=[68682], 99.90th=[73925], 99.95th=[73925], 00:09:56.411 | 99.99th=[73925] 00:09:56.411 write: IOPS=1196, BW=4784KiB/s (4899kB/s)(4832KiB/1010msec); 0 zone 
resets 00:09:56.411 slat (usec): min=8, max=19623, avg=482.36, stdev=1703.42 00:09:56.411 clat (msec): min=9, max=112, avg=61.24, stdev=29.97 00:09:56.411 lat (msec): min=9, max=113, avg=61.72, stdev=30.14 00:09:56.411 clat percentiles (msec): 00:09:56.411 | 1.00th=[ 12], 5.00th=[ 22], 10.00th=[ 27], 20.00th=[ 29], 00:09:56.411 | 30.00th=[ 39], 40.00th=[ 41], 50.00th=[ 56], 60.00th=[ 74], 00:09:56.411 | 70.00th=[ 92], 80.00th=[ 99], 90.00th=[ 100], 95.00th=[ 101], 00:09:56.411 | 99.00th=[ 102], 99.50th=[ 103], 99.90th=[ 106], 99.95th=[ 112], 00:09:56.411 | 99.99th=[ 112] 00:09:56.411 bw ( KiB/s): min= 3280, max= 5368, per=8.42%, avg=4324.00, stdev=1476.44, samples=2 00:09:56.411 iops : min= 820, max= 1342, avg=1081.00, stdev=369.11, samples=2 00:09:56.411 lat (msec) : 10=0.27%, 20=0.76%, 50=48.16%, 100=47.22%, 250=3.58% 00:09:56.411 cpu : usr=0.89%, sys=3.67%, ctx=349, majf=0, minf=15 00:09:56.411 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:09:56.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.411 issued rwts: total=1024,1208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.411 job2: (groupid=0, jobs=1): err= 0: pid=68157: Fri Jul 26 07:35:21 2024 00:09:56.411 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:09:56.411 slat (usec): min=9, max=5433, avg=80.80, stdev=491.99 00:09:56.411 clat (usec): min=6730, max=18625, avg=11393.25, stdev=1228.06 00:09:56.411 lat (usec): min=6743, max=22201, avg=11474.05, stdev=1253.96 00:09:56.411 clat percentiles (usec): 00:09:56.411 | 1.00th=[ 7373], 5.00th=[10290], 10.00th=[10683], 20.00th=[10945], 00:09:56.411 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:09:56.411 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:09:56.411 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18482], 99.95th=[18744], 00:09:56.411 | 99.99th=[18744] 00:09:56.411 write: IOPS=6048, BW=23.6MiB/s (24.8MB/s)(23.7MiB/1005msec); 0 zone resets 00:09:56.411 slat (usec): min=10, max=7177, avg=83.13, stdev=473.17 00:09:56.411 clat (usec): min=467, max=14538, avg=10386.68, stdev=1109.42 00:09:56.411 lat (usec): min=4240, max=14564, avg=10469.82, stdev=1023.72 00:09:56.411 clat percentiles (usec): 00:09:56.411 | 1.00th=[ 5735], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:09:56.411 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:09:56.411 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11469], 95.00th=[11600], 00:09:56.411 | 99.00th=[14091], 99.50th=[14222], 99.90th=[14484], 99.95th=[14484], 00:09:56.411 | 99.99th=[14484] 00:09:56.411 bw ( KiB/s): min=23032, max=24576, per=46.35%, avg=23804.00, stdev=1091.77, samples=2 00:09:56.411 iops : min= 5758, max= 6144, avg=5951.00, stdev=272.94, samples=2 00:09:56.411 lat (usec) : 500=0.01% 00:09:56.411 lat (msec) : 10=16.58%, 20=83.41% 00:09:56.411 cpu : usr=4.28%, sys=15.74%, ctx=249, majf=0, minf=11 00:09:56.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:56.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.411 issued rwts: total=5632,6079,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.411 job3: (groupid=0, jobs=1): 
err= 0: pid=68158: Fri Jul 26 07:35:21 2024 00:09:56.411 read: IOPS=1009, BW=4039KiB/s (4136kB/s)(4096KiB/1014msec) 00:09:56.411 slat (usec): min=7, max=16266, avg=400.83, stdev=1656.29 00:09:56.411 clat (usec): min=31087, max=82435, avg=49283.71, stdev=10508.40 00:09:56.411 lat (usec): min=33584, max=82476, avg=49684.54, stdev=10601.51 00:09:56.411 clat percentiles (usec): 00:09:56.411 | 1.00th=[34866], 5.00th=[35914], 10.00th=[38011], 20.00th=[39584], 00:09:56.411 | 30.00th=[40109], 40.00th=[42206], 50.00th=[47449], 60.00th=[51119], 00:09:56.411 | 70.00th=[56361], 80.00th=[58459], 90.00th=[66323], 95.00th=[67634], 00:09:56.411 | 99.00th=[68682], 99.50th=[72877], 99.90th=[77071], 99.95th=[82314], 00:09:56.411 | 99.99th=[82314] 00:09:56.411 write: IOPS=1262, BW=5049KiB/s (5170kB/s)(5120KiB/1014msec); 0 zone resets 00:09:56.411 slat (usec): min=9, max=28082, avg=454.88, stdev=1993.02 00:09:56.411 clat (msec): min=10, max=122, avg=60.29, stdev=29.34 00:09:56.411 lat (msec): min=16, max=123, avg=60.75, stdev=29.55 00:09:56.411 clat percentiles (msec): 00:09:56.411 | 1.00th=[ 21], 5.00th=[ 24], 10.00th=[ 26], 20.00th=[ 31], 00:09:56.411 | 30.00th=[ 38], 40.00th=[ 41], 50.00th=[ 54], 60.00th=[ 72], 00:09:56.411 | 70.00th=[ 91], 80.00th=[ 99], 90.00th=[ 100], 95.00th=[ 101], 00:09:56.411 | 99.00th=[ 102], 99.50th=[ 105], 99.90th=[ 115], 99.95th=[ 124], 00:09:56.411 | 99.99th=[ 124] 00:09:56.411 bw ( KiB/s): min= 3560, max= 5656, per=8.97%, avg=4608.00, stdev=1482.10, samples=2 00:09:56.411 iops : min= 890, max= 1414, avg=1152.00, stdev=370.52, samples=2 00:09:56.411 lat (msec) : 20=0.35%, 50=51.74%, 100=43.58%, 250=4.34% 00:09:56.411 cpu : usr=1.78%, sys=3.06%, ctx=335, majf=0, minf=13 00:09:56.411 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:09:56.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.411 issued rwts: total=1024,1280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.411 00:09:56.411 Run status group 0 (all jobs): 00:09:56.411 READ: bw=45.4MiB/s (47.6MB/s), 4039KiB/s-21.9MiB/s (4136kB/s-23.0MB/s), io=46.0MiB (48.2MB), run=1004-1014msec 00:09:56.411 WRITE: bw=50.1MiB/s (52.6MB/s), 4784KiB/s-23.6MiB/s (4899kB/s-24.8MB/s), io=50.9MiB (53.3MB), run=1004-1014msec 00:09:56.411 00:09:56.411 Disk stats (read/write): 00:09:56.411 nvme0n1: ios=3500/3584, merge=0/0, ticks=60012/40879, in_queue=100891, util=87.17% 00:09:56.411 nvme0n2: ios=885/1024, merge=0/0, ticks=20373/30784, in_queue=51157, util=85.01% 00:09:56.411 nvme0n3: ios=4734/5120, merge=0/0, ticks=50553/48856, in_queue=99409, util=89.04% 00:09:56.411 nvme0n4: ios=897/1024, merge=0/0, ticks=22144/30220, in_queue=52364, util=89.50% 00:09:56.411 07:35:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:56.411 07:35:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:56.411 07:35:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68177 00:09:56.411 07:35:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:56.411 [global] 00:09:56.411 thread=1 00:09:56.411 invalidate=1 00:09:56.411 rw=read 00:09:56.411 time_based=1 00:09:56.411 runtime=10 00:09:56.411 ioengine=libaio 00:09:56.411 direct=1 00:09:56.411 bs=4096 00:09:56.411 iodepth=1 
00:09:56.411 norandommap=1 00:09:56.411 numjobs=1 00:09:56.411 00:09:56.411 [job0] 00:09:56.411 filename=/dev/nvme0n1 00:09:56.411 [job1] 00:09:56.411 filename=/dev/nvme0n2 00:09:56.411 [job2] 00:09:56.411 filename=/dev/nvme0n3 00:09:56.411 [job3] 00:09:56.411 filename=/dev/nvme0n4 00:09:56.411 Could not set queue depth (nvme0n1) 00:09:56.411 Could not set queue depth (nvme0n2) 00:09:56.411 Could not set queue depth (nvme0n3) 00:09:56.411 Could not set queue depth (nvme0n4) 00:09:56.411 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.411 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.411 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.411 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.411 fio-3.35 00:09:56.411 Starting 4 threads 00:09:59.691 07:35:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:59.691 fio: pid=68221, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:59.691 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=41345024, buflen=4096 00:09:59.691 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:59.949 fio: pid=68220, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:59.949 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=46968832, buflen=4096 00:09:59.949 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:59.949 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:00.207 fio: pid=68218, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:00.207 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=51310592, buflen=4096 00:10:00.207 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:00.207 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:00.465 fio: pid=68219, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:00.465 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=4681728, buflen=4096 00:10:00.465 00:10:00.465 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68218: Fri Jul 26 07:35:25 2024 00:10:00.465 read: IOPS=3567, BW=13.9MiB/s (14.6MB/s)(48.9MiB/3512msec) 00:10:00.465 slat (usec): min=10, max=15267, avg=18.77, stdev=226.32 00:10:00.465 clat (usec): min=140, max=2480, avg=260.20, stdev=55.53 00:10:00.465 lat (usec): min=153, max=15462, avg=278.97, stdev=232.20 00:10:00.465 clat percentiles (usec): 00:10:00.465 | 1.00th=[ 155], 5.00th=[ 167], 10.00th=[ 182], 20.00th=[ 237], 00:10:00.465 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 273], 00:10:00.465 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 326], 00:10:00.465 | 99.00th=[ 367], 99.50th=[ 383], 99.90th=[ 441], 99.95th=[ 486], 00:10:00.465 | 
99.99th=[ 2278] 00:10:00.465 bw ( KiB/s): min=12656, max=14440, per=25.04%, avg=13688.00, stdev=607.07, samples=6 00:10:00.465 iops : min= 3164, max= 3610, avg=3422.00, stdev=151.77, samples=6 00:10:00.465 lat (usec) : 250=32.77%, 500=67.18%, 750=0.01% 00:10:00.465 lat (msec) : 2=0.02%, 4=0.02% 00:10:00.465 cpu : usr=1.22%, sys=4.67%, ctx=12532, majf=0, minf=1 00:10:00.465 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:00.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.465 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.465 issued rwts: total=12528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.465 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:00.465 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68219: Fri Jul 26 07:35:25 2024 00:10:00.465 read: IOPS=4640, BW=18.1MiB/s (19.0MB/s)(68.5MiB/3777msec) 00:10:00.465 slat (usec): min=11, max=14714, avg=17.08, stdev=192.94 00:10:00.465 clat (usec): min=132, max=2380, avg=196.96, stdev=53.45 00:10:00.465 lat (usec): min=145, max=15001, avg=214.04, stdev=201.09 00:10:00.465 clat percentiles (usec): 00:10:00.465 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 172], 00:10:00.465 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 198], 00:10:00.465 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 235], 95.00th=[ 251], 00:10:00.465 | 99.00th=[ 285], 99.50th=[ 343], 99.90th=[ 840], 99.95th=[ 1188], 00:10:00.465 | 99.99th=[ 2089] 00:10:00.465 bw ( KiB/s): min=17048, max=19280, per=33.82%, avg=18488.57, stdev=1020.41, samples=7 00:10:00.465 iops : min= 4262, max= 4820, avg=4622.14, stdev=255.10, samples=7 00:10:00.465 lat (usec) : 250=94.88%, 500=4.92%, 750=0.07%, 1000=0.06% 00:10:00.465 lat (msec) : 2=0.04%, 4=0.02% 00:10:00.465 cpu : usr=1.46%, sys=5.59%, ctx=17538, majf=0, minf=1 00:10:00.465 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:00.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.465 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.465 issued rwts: total=17528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.465 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:00.465 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68220: Fri Jul 26 07:35:25 2024 00:10:00.465 read: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(44.8MiB/3241msec) 00:10:00.465 slat (usec): min=12, max=8785, avg=16.05, stdev=99.85 00:10:00.465 clat (usec): min=175, max=3334, avg=265.12, stdev=44.81 00:10:00.465 lat (usec): min=190, max=9053, avg=281.18, stdev=109.43 00:10:00.465 clat percentiles (usec): 00:10:00.465 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 245], 00:10:00.465 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 00:10:00.465 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 310], 00:10:00.465 | 99.00th=[ 334], 99.50th=[ 355], 99.90th=[ 437], 99.95th=[ 652], 00:10:00.465 | 99.99th=[ 2343] 00:10:00.465 bw ( KiB/s): min=13992, max=14864, per=25.95%, avg=14184.00, stdev=342.83, samples=6 00:10:00.465 iops : min= 3498, max= 3716, avg=3546.00, stdev=85.71, samples=6 00:10:00.465 lat (usec) : 250=29.03%, 500=70.87%, 750=0.06% 00:10:00.465 lat (msec) : 2=0.02%, 4=0.02% 00:10:00.465 cpu : usr=0.96%, sys=4.69%, ctx=11471, majf=0, minf=1 00:10:00.466 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:10:00.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.466 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.466 issued rwts: total=11468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.466 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:00.466 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68221: Fri Jul 26 07:35:25 2024 00:10:00.466 read: IOPS=3415, BW=13.3MiB/s (14.0MB/s)(39.4MiB/2956msec) 00:10:00.466 slat (nsec): min=8183, max=81843, avg=10875.57, stdev=3166.15 00:10:00.466 clat (usec): min=156, max=7638, avg=280.77, stdev=93.36 00:10:00.466 lat (usec): min=179, max=7653, avg=291.64, stdev=93.40 00:10:00.466 clat percentiles (usec): 00:10:00.466 | 1.00th=[ 206], 5.00th=[ 235], 10.00th=[ 245], 20.00th=[ 255], 00:10:00.466 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:10:00.466 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 322], 95.00th=[ 334], 00:10:00.466 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 523], 99.95th=[ 1565], 00:10:00.466 | 99.99th=[ 3392] 00:10:00.466 bw ( KiB/s): min=12656, max=13960, per=24.97%, avg=13651.20, stdev=560.02, samples=5 00:10:00.466 iops : min= 3164, max= 3490, avg=3412.80, stdev=140.00, samples=5 00:10:00.466 lat (usec) : 250=15.00%, 500=84.87%, 750=0.05%, 1000=0.01% 00:10:00.466 lat (msec) : 2=0.03%, 4=0.02%, 10=0.01% 00:10:00.466 cpu : usr=0.61%, sys=3.55%, ctx=10095, majf=0, minf=1 00:10:00.466 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:00.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.466 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.466 issued rwts: total=10095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.466 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:00.466 00:10:00.466 Run status group 0 (all jobs): 00:10:00.466 READ: bw=53.4MiB/s (56.0MB/s), 13.3MiB/s-18.1MiB/s (14.0MB/s-19.0MB/s), io=202MiB (211MB), run=2956-3777msec 00:10:00.466 00:10:00.466 Disk stats (read/write): 00:10:00.466 nvme0n1: ios=11747/0, merge=0/0, ticks=3174/0, in_queue=3174, util=94.99% 00:10:00.466 nvme0n2: ios=16655/0, merge=0/0, ticks=3340/0, in_queue=3340, util=95.26% 00:10:00.466 nvme0n3: ios=10996/0, merge=0/0, ticks=2975/0, in_queue=2975, util=96.36% 00:10:00.466 nvme0n4: ios=9796/0, merge=0/0, ticks=2608/0, in_queue=2608, util=96.49% 00:10:00.466 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:00.466 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:00.724 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:00.724 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:00.983 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:00.983 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:01.242 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.242 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:01.501 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.501 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:01.759 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:01.759 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 68177 00:10:01.759 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:01.759 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:01.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.759 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:01.759 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:01.759 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:01.759 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.759 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:01.759 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.759 nvmf hotplug test: fio failed as expected 00:10:01.759 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:01.759 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:01.759 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:01.759 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:02.017 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:02.017 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:02.017 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:02.017 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:02.017 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:02.017 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:02.017 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:02.017 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:02.017 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:02.017 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 
00:10:02.017 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:02.017 rmmod nvme_tcp 00:10:02.017 rmmod nvme_fabrics 00:10:02.275 rmmod nvme_keyring 00:10:02.275 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:02.275 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:02.275 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:02.275 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 67788 ']' 00:10:02.275 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 67788 00:10:02.275 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 67788 ']' 00:10:02.275 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 67788 00:10:02.275 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:02.275 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:02.275 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67788 00:10:02.275 killing process with pid 67788 00:10:02.275 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:02.275 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:02.275 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67788' 00:10:02.275 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 67788 00:10:02.275 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 67788 00:10:02.533 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:02.533 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:02.534 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:02.534 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:02.534 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:02.534 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.534 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.534 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.534 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:02.534 ************************************ 00:10:02.534 END TEST nvmf_fio_target 00:10:02.534 ************************************ 00:10:02.534 00:10:02.534 real 0m19.695s 00:10:02.534 user 1m14.989s 00:10:02.534 sys 0m9.343s 00:10:02.534 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:02.534 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.534 
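The trace above is the tail of the fio hotplug check: with four libaio read jobs still running against /dev/nvme0n1 through /dev/nvme0n4, the script deletes the RAID and malloc bdevs backing the subsystem's namespaces, so every job aborts with err=121 (Remote I/O error), the script records "nvmf hotplug test: fio failed as expected", disconnects the initiator, and tears the target down. A minimal sketch of the same sequence, assuming a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 with a Malloc0-backed namespace and the kernel initiator connected as /dev/nvme0n1 (the single fio job below is illustrative, not the job file used by the script):

# start a long-running read workload against the exported namespace
fio --name=job0 --filename=/dev/nvme0n1 --rw=read --bs=4k --ioengine=libaio --iodepth=1 --time_based --runtime=30 &
fio_pid=$!

# hot-remove the backing bdev while I/O is in flight; outstanding reads
# now complete with EREMOTEIO (err=121) on the initiator side
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0

# fio exiting non-zero is the expected outcome of the hotplug test
wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'

# clean up: disconnect the initiator and drop the subsystem
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1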
07:35:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:02.534 07:35:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:02.534 07:35:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:02.534 07:35:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:02.534 ************************************ 00:10:02.534 START TEST nvmf_bdevio 00:10:02.534 ************************************ 00:10:02.534 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:02.792 * Looking for test storage... 00:10:02.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:02.792 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.793 
07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:02.793 07:35:28 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:02.793 Cannot find device "nvmf_tgt_br" 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:02.793 Cannot find device "nvmf_tgt_br2" 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:02.793 Cannot find device "nvmf_tgt_br" 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:02.793 Cannot find device "nvmf_tgt_br2" 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:02.793 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:02.793 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:02.793 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:03.052 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:03.052 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:03.052 07:35:28 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:03.052 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:03.052 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:03.052 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:03.052 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:03.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:10:03.053 00:10:03.053 --- 10.0.0.2 ping statistics --- 00:10:03.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.053 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:03.053 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:03.053 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:10:03.053 00:10:03.053 --- 10.0.0.3 ping statistics --- 00:10:03.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.053 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:03.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:03.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:03.053 00:10:03.053 --- 10.0.0.1 ping statistics --- 00:10:03.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.053 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=68483 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 68483 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 68483 ']' 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:03.053 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.053 [2024-07-26 07:35:28.592331] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:10:03.053 [2024-07-26 07:35:28.592632] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.311 [2024-07-26 07:35:28.726400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.311 [2024-07-26 07:35:28.871686] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.311 [2024-07-26 07:35:28.871938] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.311 [2024-07-26 07:35:28.872071] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.311 [2024-07-26 07:35:28.872207] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.311 [2024-07-26 07:35:28.872242] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.311 [2024-07-26 07:35:28.872498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:03.311 [2024-07-26 07:35:28.872835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:03.311 [2024-07-26 07:35:28.875070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:03.311 [2024-07-26 07:35:28.875119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.569 [2024-07-26 07:35:28.947835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.135 [2024-07-26 07:35:29.644558] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.135 Malloc0 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.135 [2024-07-26 07:35:29.727662] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:04.135 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:04.135 { 00:10:04.135 "params": { 00:10:04.135 "name": "Nvme$subsystem", 00:10:04.135 "trtype": "$TEST_TRANSPORT", 00:10:04.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:04.135 "adrfam": "ipv4", 00:10:04.135 "trsvcid": "$NVMF_PORT", 00:10:04.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:04.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:04.135 "hdgst": ${hdgst:-false}, 00:10:04.135 "ddgst": ${ddgst:-false} 00:10:04.135 }, 00:10:04.135 "method": "bdev_nvme_attach_controller" 00:10:04.135 } 00:10:04.135 EOF 00:10:04.135 )") 00:10:04.394 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:04.394 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:10:04.394 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:04.394 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:04.394 "params": { 00:10:04.394 "name": "Nvme1", 00:10:04.394 "trtype": "tcp", 00:10:04.394 "traddr": "10.0.0.2", 00:10:04.394 "adrfam": "ipv4", 00:10:04.394 "trsvcid": "4420", 00:10:04.394 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:04.394 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:04.394 "hdgst": false, 00:10:04.394 "ddgst": false 00:10:04.394 }, 00:10:04.394 "method": "bdev_nvme_attach_controller" 00:10:04.394 }' 00:10:04.394 [2024-07-26 07:35:29.787923] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:10:04.394 [2024-07-26 07:35:29.788009] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68519 ] 00:10:04.394 [2024-07-26 07:35:29.932343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:04.652 [2024-07-26 07:35:30.053849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.652 [2024-07-26 07:35:30.053989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.652 [2024-07-26 07:35:30.054361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.652 [2024-07-26 07:35:30.161680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:04.910 I/O targets: 00:10:04.910 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:04.910 00:10:04.910 00:10:04.910 CUnit - A unit testing framework for C - Version 2.1-3 00:10:04.910 http://cunit.sourceforge.net/ 00:10:04.910 00:10:04.910 00:10:04.910 Suite: bdevio tests on: Nvme1n1 00:10:04.910 Test: blockdev write read block ...passed 00:10:04.910 Test: blockdev write zeroes read block ...passed 00:10:04.910 Test: blockdev write zeroes read no split ...passed 00:10:04.910 Test: blockdev write zeroes read split ...passed 00:10:04.910 Test: blockdev write zeroes read split partial ...passed 00:10:04.910 Test: blockdev reset ...[2024-07-26 07:35:30.329505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:04.910 [2024-07-26 07:35:30.330091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a3c7c0 (9): Bad file descriptor 00:10:04.910 [2024-07-26 07:35:30.343939] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:04.910 passed 00:10:04.910 Test: blockdev write read 8 blocks ...passed 00:10:04.910 Test: blockdev write read size > 128k ...passed 00:10:04.910 Test: blockdev write read invalid size ...passed 00:10:04.910 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:04.910 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:04.910 Test: blockdev write read max offset ...passed 00:10:04.910 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:04.910 Test: blockdev writev readv 8 blocks ...passed 00:10:04.910 Test: blockdev writev readv 30 x 1block ...passed 00:10:04.910 Test: blockdev writev readv block ...passed 00:10:04.910 Test: blockdev writev readv size > 128k ...passed 00:10:04.910 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:04.910 Test: blockdev comparev and writev ...[2024-07-26 07:35:30.354647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:04.910 [2024-07-26 07:35:30.354970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:04.910 [2024-07-26 07:35:30.355007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:04.910 [2024-07-26 07:35:30.355022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:04.910 [2024-07-26 07:35:30.355303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:04.910 [2024-07-26 07:35:30.355328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:04.910 [2024-07-26 07:35:30.355347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:04.910 [2024-07-26 07:35:30.355358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:04.910 [2024-07-26 07:35:30.355642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:04.910 [2024-07-26 07:35:30.355665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:04.910 [2024-07-26 07:35:30.355684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:04.910 [2024-07-26 07:35:30.355694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:04.910 [2024-07-26 07:35:30.356023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:04.910 [2024-07-26 07:35:30.356049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:04.910 [2024-07-26 07:35:30.356136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:04.910 [2024-07-26 07:35:30.356153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:04.910 passed 00:10:04.910 Test: blockdev nvme passthru rw ...passed 00:10:04.910 Test: blockdev nvme passthru vendor specific ...[2024-07-26 07:35:30.357810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:04.911 [2024-07-26 07:35:30.357967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:04.911 [2024-07-26 07:35:30.358165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:04.911 [2024-07-26 07:35:30.358185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:04.911 [2024-07-26 07:35:30.358289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:04.911 [2024-07-26 07:35:30.358306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:04.911 [2024-07-26 07:35:30.358421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:04.911 [2024-07-26 07:35:30.358438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:04.911 passed 00:10:04.911 Test: blockdev nvme admin passthru ...passed 00:10:04.911 Test: blockdev copy ...passed 00:10:04.911 00:10:04.911 Run Summary: Type Total Ran Passed Failed Inactive 00:10:04.911 suites 1 1 n/a 0 0 00:10:04.911 tests 23 23 23 0 0 00:10:04.911 asserts 152 152 152 0 n/a 00:10:04.911 00:10:04.911 Elapsed time = 0.146 seconds 00:10:05.168 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.168 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.168 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.168 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.168 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:05.168 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:05.168 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:05.168 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:05.168 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:05.168 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:05.168 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:05.168 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:05.168 rmmod nvme_tcp 00:10:05.168 rmmod nvme_fabrics 00:10:05.168 rmmod nvme_keyring 00:10:05.427 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:05.427 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:05.427 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
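The bdevio suite above never touches the kernel initiator: gen_nvmf_target_json emits a bdev_nvme_attach_controller entry for 10.0.0.2:4420, bdevio reads it via --json /dev/fd/62, attaches Nvme1 over TCP, and runs its 23 tests against Nvme1n1. The same invocation, sketched with the config written to a file instead of a process substitution; the params block is the one printed in the trace, while the surrounding "subsystems"/"config" wrapper is reconstructed from SPDK's usual JSON config layout (the log only shows the inner entry), and /tmp/bdevio.json is an arbitrary path:

cat > /tmp/bdevio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# run the bdevio unit tests against the attached namespace
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio.json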
00:10:05.427 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 68483 ']' 00:10:05.427 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 68483 00:10:05.427 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 68483 ']' 00:10:05.427 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 68483 00:10:05.427 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:05.427 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:05.427 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68483 00:10:05.427 killing process with pid 68483 00:10:05.427 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:05.427 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:05.427 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68483' 00:10:05.427 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 68483 00:10:05.427 07:35:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 68483 00:10:05.686 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:05.686 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:05.686 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:05.686 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:05.686 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:05.686 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.686 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.686 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.686 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:05.686 00:10:05.686 real 0m3.116s 00:10:05.686 user 0m10.641s 00:10:05.686 sys 0m0.868s 00:10:05.686 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.686 ************************************ 00:10:05.686 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.686 END TEST nvmf_bdevio 00:10:05.686 ************************************ 00:10:05.686 07:35:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:05.686 00:10:05.686 real 2m35.092s 00:10:05.686 user 6m59.106s 00:10:05.686 sys 0m49.856s 00:10:05.686 ************************************ 00:10:05.686 END TEST nvmf_target_core 00:10:05.686 ************************************ 00:10:05.686 07:35:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.686 07:35:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.686 07:35:31 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:05.686 07:35:31 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:05.686 07:35:31 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.686 07:35:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:05.686 ************************************ 00:10:05.686 START TEST nvmf_target_extra 00:10:05.686 ************************************ 00:10:05.686 07:35:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:05.946 * Looking for test storage... 00:10:05.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:05.946 07:35:31 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:05.946 ************************************ 00:10:05.946 START TEST nvmf_auth_target 00:10:05.946 ************************************ 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:05.946 * Looking for test storage... 00:10:05.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.946 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.947 07:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.947 07:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:05.947 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:06.206 Cannot find device "nvmf_tgt_br" 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:06.206 Cannot find device "nvmf_tgt_br2" 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:06.206 Cannot find device "nvmf_tgt_br" 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:06.206 Cannot find device "nvmf_tgt_br2" 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:06.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:06.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:06.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:10:06.206 00:10:06.206 --- 10.0.0.2 ping statistics --- 00:10:06.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.206 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:06.206 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:06.206 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:06.206 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:10:06.206 00:10:06.207 --- 10.0.0.3 ping statistics --- 00:10:06.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.207 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:06.207 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:06.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:06.465 00:10:06.465 --- 10.0.0.1 ping statistics --- 00:10:06.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.465 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=68743 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 68743 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68743 ']' 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
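The nvmf_veth_init steps traced above build a small veth/namespace test network before the target is started: the initiator keeps 10.0.0.1 in the root namespace, both target interfaces (10.0.0.2 and 10.0.0.3) live inside nvmf_tgt_ns_spdk, and everything is stitched together through the nvmf_br bridge. Condensed into a standalone sketch, with names, addresses and rules taken verbatim from the trace (the best-effort cleanup of a previous run, which produces the "Cannot find device" messages above, is omitted):

# Build the two-sided test network used by the nvmf TCP tests.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the root-namespace peers together and open the NVMe/TCP port.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Reachability checks in both directions, as in the trace.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
# The target then runs inside the namespace, which is why NVMF_APP is
# prefixed with the netns wrapper in the trace above:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &

Keeping the initiator interface in the root namespace while both target interfaces sit behind the bridge is what lets the kernel nvme-cli initiator and the host-side SPDK stack reach the in-namespace target over ordinary TCP on port 4420.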
00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:06.466 07:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=68775 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=53a01c390ddab8e6cae3ef2e82ea863a03e82659f9148c1b 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:07.402 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Zj9 00:10:07.403 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 53a01c390ddab8e6cae3ef2e82ea863a03e82659f9148c1b 0 00:10:07.403 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 53a01c390ddab8e6cae3ef2e82ea863a03e82659f9148c1b 0 00:10:07.403 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:07.403 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:07.403 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=53a01c390ddab8e6cae3ef2e82ea863a03e82659f9148c1b 00:10:07.403 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:07.403 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:07.403 07:35:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Zj9 00:10:07.403 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Zj9 00:10:07.403 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Zj9 00:10:07.403 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:07.403 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:07.403 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:07.403 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:07.403 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:07.403 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:07.403 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:07.403 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e9ccb1ce02ac17a5e1153b7874e75759ae9f2916fe2e0110732aca76d9338b91 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Ccx 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e9ccb1ce02ac17a5e1153b7874e75759ae9f2916fe2e0110732aca76d9338b91 3 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e9ccb1ce02ac17a5e1153b7874e75759ae9f2916fe2e0110732aca76d9338b91 3 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e9ccb1ce02ac17a5e1153b7874e75759ae9f2916fe2e0110732aca76d9338b91 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Ccx 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Ccx 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Ccx 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:07.662 07:35:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c6f484c4f1c168afdb008fc4c260f256 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.CUt 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c6f484c4f1c168afdb008fc4c260f256 1 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c6f484c4f1c168afdb008fc4c260f256 1 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c6f484c4f1c168afdb008fc4c260f256 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.CUt 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.CUt 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.CUt 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7c4c3e61cffeb07cf1a1e3a42c90f28d744a835f291f44c4 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.HHN 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7c4c3e61cffeb07cf1a1e3a42c90f28d744a835f291f44c4 2 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7c4c3e61cffeb07cf1a1e3a42c90f28d744a835f291f44c4 2 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7c4c3e61cffeb07cf1a1e3a42c90f28d744a835f291f44c4 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.HHN 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.HHN 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.HHN 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:07.662 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d445bc76aa610fa790762819da0bd2421aae5d60d94980ab 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.eZS 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d445bc76aa610fa790762819da0bd2421aae5d60d94980ab 2 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d445bc76aa610fa790762819da0bd2421aae5d60d94980ab 2 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d445bc76aa610fa790762819da0bd2421aae5d60d94980ab 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.eZS 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.eZS 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.eZS 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:07.663 07:35:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:07.663 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:07.920 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:07.920 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6b33b1e297cff56856078978dc732f97 00:10:07.920 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:07.920 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.yiI 00:10:07.920 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6b33b1e297cff56856078978dc732f97 1 00:10:07.920 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6b33b1e297cff56856078978dc732f97 1 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6b33b1e297cff56856078978dc732f97 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.yiI 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.yiI 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.yiI 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1dfbea86b99e4f2f3b38d51a5feef3f175a13b6911f725e8cfa07f0eb6696803 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.3rF 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
1dfbea86b99e4f2f3b38d51a5feef3f175a13b6911f725e8cfa07f0eb6696803 3 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1dfbea86b99e4f2f3b38d51a5feef3f175a13b6911f725e8cfa07f0eb6696803 3 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1dfbea86b99e4f2f3b38d51a5feef3f175a13b6911f725e8cfa07f0eb6696803 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.3rF 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.3rF 00:10:07.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.3rF 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 68743 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68743 ']' 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.921 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:08.179 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.179 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:08.179 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 68775 /var/tmp/host.sock 00:10:08.179 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68775 ']' 00:10:08.179 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:10:08.179 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:08.179 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
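gen_dhchap_key, traced above for keys[0]..keys[3] and ckeys[0]..ckeys[2], draws random key material with xxd and wraps it into a DHHC-1 secret through an inline "python -" helper whose body is not shown in the trace. A hedged reconstruction of the idea follows: the payload encoding used here (base64 of the key bytes followed by a little-endian CRC-32) is an assumption and should be checked against format_dhchap_key in nvmf/common.sh; newer nvme-cli releases also ship "nvme gen-dhchap-key" as an alternative way to produce such a secret.

# Hedged sketch of gen_dhchap_key <digest> <len> as used at target/auth.sh@67-@70.
# Digest ids follow the trace: null=0, sha256=1, sha384=2, sha512=3.
gen_dhchap_key() {
    local digest=$1 len=$2
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # "len" hex characters of key material
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # The DHHC-1 wrapper is produced by a python one-liner in common.sh;
    # the encoding below (base64 of key bytes plus CRC-32) is an assumption.
    python3 - "$key" "$digest" > "$file" <<'EOF'
import base64, binascii, struct, sys
key = bytes.fromhex(sys.argv[1])
hmac_id = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}[sys.argv[2]]
crc = struct.pack("<I", binascii.crc32(key) & 0xffffffff)
print(f"DHHC-1:{hmac_id:02x}:{base64.b64encode(key + crc).decode()}:")
EOF
    chmod 0600 "$file"
    echo "$file"
}

# Mirrors the calls above: a 48-character null host key and a 64-character sha512 ctrlr key.
keys[0]=$(gen_dhchap_key null 48)
ckeys[0]=$(gen_dhchap_key sha512 64)

The leading id in the generated string matches what the trace later passes to nvme connect: DHHC-1:00: for the null host key and DHHC-1:03: for its sha512 controller counterpart.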
00:10:08.179 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:08.179 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.438 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.438 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:08.438 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:08.438 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.438 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.438 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.438 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:08.438 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Zj9 00:10:08.438 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.438 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.438 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.438 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Zj9 00:10:08.438 07:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Zj9 00:10:08.696 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Ccx ]] 00:10:08.696 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ccx 00:10:08.696 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.696 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.696 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.696 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ccx 00:10:08.696 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ccx 00:10:08.955 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:08.955 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.CUt 00:10:08.955 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.955 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.955 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.955 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.CUt 00:10:08.955 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.CUt 00:10:09.214 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.HHN ]] 00:10:09.214 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HHN 00:10:09.214 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.214 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.214 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.214 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HHN 00:10:09.214 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HHN 00:10:09.473 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:09.473 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.eZS 00:10:09.473 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.473 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.473 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.473 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.eZS 00:10:09.473 07:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.eZS 00:10:09.732 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.yiI ]] 00:10:09.732 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.yiI 00:10:09.732 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.732 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.732 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.732 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.yiI 00:10:09.732 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.yiI 00:10:09.990 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:09.990 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.3rF 00:10:09.990 07:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.990 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.990 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.990 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.3rF 00:10:09.990 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.3rF 00:10:09.990 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:09.990 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:09.990 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:09.990 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:09.990 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:09.990 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:10.249 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:10.249 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:10.249 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:10.249 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:10.249 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:10.249 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:10.249 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:10.249 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.249 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.249 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.249 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:10.249 07:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:10:10.814 00:10:10.814 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:10.814 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:10.814 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:11.071 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:11.071 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:11.071 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.071 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.071 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.071 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:11.071 { 00:10:11.071 "cntlid": 1, 00:10:11.071 "qid": 0, 00:10:11.071 "state": "enabled", 00:10:11.071 "thread": "nvmf_tgt_poll_group_000", 00:10:11.071 "listen_address": { 00:10:11.071 "trtype": "TCP", 00:10:11.071 "adrfam": "IPv4", 00:10:11.071 "traddr": "10.0.0.2", 00:10:11.071 "trsvcid": "4420" 00:10:11.071 }, 00:10:11.071 "peer_address": { 00:10:11.071 "trtype": "TCP", 00:10:11.071 "adrfam": "IPv4", 00:10:11.071 "traddr": "10.0.0.1", 00:10:11.071 "trsvcid": "35072" 00:10:11.071 }, 00:10:11.071 "auth": { 00:10:11.071 "state": "completed", 00:10:11.071 "digest": "sha256", 00:10:11.071 "dhgroup": "null" 00:10:11.071 } 00:10:11.071 } 00:10:11.071 ]' 00:10:11.071 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:11.071 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:11.071 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:11.071 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:11.071 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:11.071 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:11.071 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:11.071 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:11.328 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:10:15.562 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:15.562 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:10:15.562 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:15.562 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.562 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.562 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.562 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:15.562 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:15.562 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:15.821 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:15.821 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:15.821 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:15.821 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:15.821 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:15.821 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:15.821 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:15.821 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.821 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.821 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.821 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:15.821 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:16.079 00:10:16.079 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:16.079 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:16.079 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
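At this point the trace has completed the first connect_authenticate iteration (sha256/null with key0) and is verifying the second one (key1). Every iteration follows the same recipe; the sketch below condenses it, using rpc_tgt/rpc_host as shorthand introduced here for the two RPC sockets (the script itself uses rpc_cmd against /var/tmp/spdk.sock and the hostrpc wrapper around rpc.py -s /var/tmp/host.sock), and folding the trace's separate jq checks into single jq -e expressions:

# One iteration of the loops at target/auth.sh@91-@93:
#   for digest in sha256 sha384 sha512; for dhgroup in null ffdhe2048..ffdhe8192; for i in 0..3
rpc_tgt()  { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }                       # nvmf_tgt, /var/tmp/spdk.sock
rpc_host() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; } # host-side spdk_tgt

hostnqn=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a
subnqn=nqn.2024-03.io.spdk:cnode0

# 1. Register the generated keyfiles on both sides (done once, target/auth.sh@81-@86).
rpc_tgt  keyring_file_add_key "key$i"  "${keys[i]}"
rpc_host keyring_file_add_key "key$i"  "${keys[i]}"
rpc_tgt  keyring_file_add_key "ckey$i" "${ckeys[i]}"   # only when a ctrlr key exists
rpc_host keyring_file_add_key "ckey$i" "${ckeys[i]}"

# 2. Constrain the initiator to one digest/dhgroup combination.
rpc_host bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 3. Require DH-HMAC-CHAP for this host on the subsystem, then attach a controller.
rpc_tgt  nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
         --dhchap-key "key$i" --dhchap-ctrlr-key "ckey$i"
rpc_host bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
         -q "$hostnqn" -n "$subnqn" --dhchap-key "key$i" --dhchap-ctrlr-key "ckey$i"

# 4. Verify the qpair authenticated with the expected parameters.
rpc_host bdev_nvme_get_controllers | jq -e '.[0].name == "nvme0"'
rpc_tgt  nvmf_subsystem_get_qpairs "$subnqn" | \
         jq -e --arg d "$digest" --arg g "$dhgroup" \
            '.[0].auth.state == "completed" and .[0].auth.digest == $d and .[0].auth.dhgroup == $g'

# 5. Detach before repeating the handshake from the kernel initiator.
rpc_host bdev_nvme_detach_controller nvme0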
00:10:16.337 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:16.337 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:16.337 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.337 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.596 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.596 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:16.596 { 00:10:16.596 "cntlid": 3, 00:10:16.596 "qid": 0, 00:10:16.596 "state": "enabled", 00:10:16.596 "thread": "nvmf_tgt_poll_group_000", 00:10:16.596 "listen_address": { 00:10:16.596 "trtype": "TCP", 00:10:16.596 "adrfam": "IPv4", 00:10:16.596 "traddr": "10.0.0.2", 00:10:16.596 "trsvcid": "4420" 00:10:16.596 }, 00:10:16.596 "peer_address": { 00:10:16.596 "trtype": "TCP", 00:10:16.596 "adrfam": "IPv4", 00:10:16.596 "traddr": "10.0.0.1", 00:10:16.596 "trsvcid": "52064" 00:10:16.596 }, 00:10:16.596 "auth": { 00:10:16.596 "state": "completed", 00:10:16.596 "digest": "sha256", 00:10:16.596 "dhgroup": "null" 00:10:16.596 } 00:10:16.596 } 00:10:16.596 ]' 00:10:16.596 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:16.596 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:16.596 07:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:16.596 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:16.596 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:16.596 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:16.596 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:16.596 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.854 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:01:YzZmNDg0YzRmMWMxNjhhZmRiMDA4ZmM0YzI2MGYyNTZo4xV2: --dhchap-ctrl-secret DHHC-1:02:N2M0YzNlNjFjZmZlYjA3Y2YxYTFlM2E0MmM5MGYyOGQ3NDRhODM1ZjI5MWY0NGM0FQ/mQw==: 00:10:17.421 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:17.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
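The qpair check and the nvme-cli round trip that follow each attach are identical for every digest/dhgroup/key combination; roughly the sketch below, where HOST_NQN, HOST_ID, KEY and CKEY are assumed stand-ins for the UUID-based host NQN and the DHHC-1 secrets shown in the trace.

# Verify that in-band authentication completed with the negotiated parameters.
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "null" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
# Drop the RPC-attached controller, repeat the handshake through nvme-cli with the
# DHHC-1 secrets passed directly, then revoke the host entry before the next key.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOST_NQN" --hostid "$HOST_ID" --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN"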
00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:17.680 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:18.248 00:10:18.248 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:18.248 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:18.248 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:18.506 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.506 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.506 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.506 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:18.506 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.506 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:18.506 { 00:10:18.506 "cntlid": 5, 00:10:18.506 "qid": 0, 00:10:18.506 "state": "enabled", 00:10:18.506 "thread": "nvmf_tgt_poll_group_000", 00:10:18.506 "listen_address": { 00:10:18.506 "trtype": "TCP", 00:10:18.506 "adrfam": "IPv4", 00:10:18.506 "traddr": "10.0.0.2", 00:10:18.506 "trsvcid": "4420" 00:10:18.506 }, 00:10:18.506 "peer_address": { 00:10:18.506 "trtype": "TCP", 00:10:18.506 "adrfam": "IPv4", 00:10:18.506 "traddr": "10.0.0.1", 00:10:18.506 "trsvcid": "52106" 00:10:18.506 }, 00:10:18.506 "auth": { 00:10:18.506 "state": "completed", 00:10:18.506 "digest": "sha256", 00:10:18.506 "dhgroup": "null" 00:10:18.506 } 00:10:18.506 } 00:10:18.506 ]' 00:10:18.506 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:18.506 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:18.506 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:18.506 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:18.506 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:18.506 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.506 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.506 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.764 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:02:ZDQ0NWJjNzZhYTYxMGZhNzkwNzYyODE5ZGEwYmQyNDIxYWFlNWQ2MGQ5NDk4MGFiHBFIeQ==: --dhchap-ctrl-secret DHHC-1:01:NmIzM2IxZTI5N2NmZjU2ODU2MDc4OTc4ZGM3MzJmOTeZS3F4: 00:10:19.331 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:19.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:19.331 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:19.331 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.331 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.589 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.589 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:19.589 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:19.589 07:35:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:19.848 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:19.848 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:19.848 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:19.848 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:19.848 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:19.848 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.848 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:10:19.848 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.848 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.848 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.848 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:19.848 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:20.107 00:10:20.107 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:20.107 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.107 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:20.366 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.366 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.366 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.366 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.366 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.366 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:20.366 { 00:10:20.366 "cntlid": 7, 00:10:20.366 "qid": 0, 00:10:20.366 "state": "enabled", 00:10:20.366 "thread": "nvmf_tgt_poll_group_000", 00:10:20.366 "listen_address": { 00:10:20.366 "trtype": "TCP", 00:10:20.366 "adrfam": "IPv4", 00:10:20.366 "traddr": 
"10.0.0.2", 00:10:20.366 "trsvcid": "4420" 00:10:20.366 }, 00:10:20.366 "peer_address": { 00:10:20.366 "trtype": "TCP", 00:10:20.366 "adrfam": "IPv4", 00:10:20.366 "traddr": "10.0.0.1", 00:10:20.366 "trsvcid": "52124" 00:10:20.366 }, 00:10:20.366 "auth": { 00:10:20.366 "state": "completed", 00:10:20.366 "digest": "sha256", 00:10:20.366 "dhgroup": "null" 00:10:20.366 } 00:10:20.366 } 00:10:20.366 ]' 00:10:20.366 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:20.366 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:20.366 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:20.366 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:20.366 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:20.366 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.366 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.366 07:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:20.624 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:10:21.560 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.560 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:21.560 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.560 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.560 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.560 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:21.560 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:21.560 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:21.560 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:21.560 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:10:21.560 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:21.560 07:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:21.560 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:21.560 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:21.560 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.560 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:21.560 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.560 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.560 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.560 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:21.560 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:22.126 00:10:22.126 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:22.126 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:22.126 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.126 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.126 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.126 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.126 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.126 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.126 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:22.126 { 00:10:22.126 "cntlid": 9, 00:10:22.126 "qid": 0, 00:10:22.126 "state": "enabled", 00:10:22.126 "thread": "nvmf_tgt_poll_group_000", 00:10:22.126 "listen_address": { 00:10:22.126 "trtype": "TCP", 00:10:22.126 "adrfam": "IPv4", 00:10:22.126 "traddr": "10.0.0.2", 00:10:22.126 "trsvcid": "4420" 00:10:22.126 }, 00:10:22.126 "peer_address": { 00:10:22.126 "trtype": "TCP", 00:10:22.126 "adrfam": "IPv4", 00:10:22.126 "traddr": "10.0.0.1", 00:10:22.126 "trsvcid": "52146" 00:10:22.126 }, 00:10:22.126 "auth": { 00:10:22.126 "state": "completed", 00:10:22.126 "digest": "sha256", 00:10:22.126 "dhgroup": "ffdhe2048" 00:10:22.126 } 00:10:22.126 } 
00:10:22.126 ]' 00:10:22.126 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:22.384 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:22.384 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:22.384 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:22.384 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:22.384 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:22.384 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:22.384 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:22.642 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:10:23.577 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.577 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:23.577 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.577 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.577 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.577 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:23.577 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:23.577 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:23.577 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:23.577 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:23.577 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:23.577 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:23.577 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:23.577 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.577 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.577 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.577 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.577 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.577 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.577 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.835 00:10:23.835 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:23.835 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:23.835 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:24.093 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.093 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.093 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.093 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.093 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.093 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:24.093 { 00:10:24.093 "cntlid": 11, 00:10:24.093 "qid": 0, 00:10:24.093 "state": "enabled", 00:10:24.093 "thread": "nvmf_tgt_poll_group_000", 00:10:24.093 "listen_address": { 00:10:24.093 "trtype": "TCP", 00:10:24.093 "adrfam": "IPv4", 00:10:24.093 "traddr": "10.0.0.2", 00:10:24.093 "trsvcid": "4420" 00:10:24.093 }, 00:10:24.093 "peer_address": { 00:10:24.093 "trtype": "TCP", 00:10:24.093 "adrfam": "IPv4", 00:10:24.093 "traddr": "10.0.0.1", 00:10:24.093 "trsvcid": "58584" 00:10:24.093 }, 00:10:24.093 "auth": { 00:10:24.093 "state": "completed", 00:10:24.093 "digest": "sha256", 00:10:24.093 "dhgroup": "ffdhe2048" 00:10:24.093 } 00:10:24.093 } 00:10:24.093 ]' 00:10:24.352 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:24.352 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:24.352 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:24.352 07:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:24.352 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:24.352 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.352 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.352 07:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.610 07:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:01:YzZmNDg0YzRmMWMxNjhhZmRiMDA4ZmM0YzI2MGYyNTZo4xV2: --dhchap-ctrl-secret DHHC-1:02:N2M0YzNlNjFjZmZlYjA3Y2YxYTFlM2E0MmM5MGYyOGQ3NDRhODM1ZjI5MWY0NGM0FQ/mQw==: 00:10:25.543 07:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.543 07:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:25.543 07:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.543 07:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.543 07:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.543 07:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:25.543 07:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:25.543 07:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:25.543 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:10:25.543 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:25.543 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:25.543 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:25.543 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:25.543 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.543 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.543 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
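The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line echoed in the trace is what makes the controller key optional: the array picks up the two extra arguments only when a controller key exists for that index, which is why the key3 iterations above call nvmf_subsystem_add_host with --dhchap-key key3 alone. A minimal illustration of the expansion (array contents are placeholders, not the real keys):

ckeys=("c0" "c1" "c2" "")          # stand-ins; index 3 deliberately left empty
keyid=2
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${ckey[@]}"                  # -> --dhchap-ctrlr-key ckey2
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${#ckey[@]}"                 # -> 0, so no --dhchap-ctrlr-key argument is passed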
00:10:25.543 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.543 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.543 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.543 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.801 00:10:26.059 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:26.059 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:26.059 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:26.317 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:26.317 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:26.317 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.317 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.317 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.317 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:26.317 { 00:10:26.317 "cntlid": 13, 00:10:26.317 "qid": 0, 00:10:26.317 "state": "enabled", 00:10:26.317 "thread": "nvmf_tgt_poll_group_000", 00:10:26.317 "listen_address": { 00:10:26.317 "trtype": "TCP", 00:10:26.317 "adrfam": "IPv4", 00:10:26.317 "traddr": "10.0.0.2", 00:10:26.317 "trsvcid": "4420" 00:10:26.317 }, 00:10:26.317 "peer_address": { 00:10:26.317 "trtype": "TCP", 00:10:26.317 "adrfam": "IPv4", 00:10:26.317 "traddr": "10.0.0.1", 00:10:26.317 "trsvcid": "58614" 00:10:26.317 }, 00:10:26.317 "auth": { 00:10:26.317 "state": "completed", 00:10:26.317 "digest": "sha256", 00:10:26.317 "dhgroup": "ffdhe2048" 00:10:26.317 } 00:10:26.317 } 00:10:26.317 ]' 00:10:26.317 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:26.317 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:26.317 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:26.317 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:26.317 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:26.317 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:26.317 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:26.317 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.575 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:02:ZDQ0NWJjNzZhYTYxMGZhNzkwNzYyODE5ZGEwYmQyNDIxYWFlNWQ2MGQ5NDk4MGFiHBFIeQ==: --dhchap-ctrl-secret DHHC-1:01:NmIzM2IxZTI5N2NmZjU2ODU2MDc4OTc4ZGM3MzJmOTeZS3F4: 00:10:27.141 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:27.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:27.141 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:27.141 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.141 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.141 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.141 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:27.141 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:27.141 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:27.400 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:27.400 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:27.400 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:27.400 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:27.400 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:27.400 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.400 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:10:27.400 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.400 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.400 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.400 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:27.400 07:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:27.965 00:10:27.965 07:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:27.966 07:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.966 07:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:27.966 07:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.223 07:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.223 07:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.223 07:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.223 07:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.223 07:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:28.223 { 00:10:28.223 "cntlid": 15, 00:10:28.223 "qid": 0, 00:10:28.223 "state": "enabled", 00:10:28.223 "thread": "nvmf_tgt_poll_group_000", 00:10:28.223 "listen_address": { 00:10:28.223 "trtype": "TCP", 00:10:28.223 "adrfam": "IPv4", 00:10:28.223 "traddr": "10.0.0.2", 00:10:28.223 "trsvcid": "4420" 00:10:28.223 }, 00:10:28.223 "peer_address": { 00:10:28.223 "trtype": "TCP", 00:10:28.223 "adrfam": "IPv4", 00:10:28.223 "traddr": "10.0.0.1", 00:10:28.223 "trsvcid": "58638" 00:10:28.223 }, 00:10:28.223 "auth": { 00:10:28.223 "state": "completed", 00:10:28.223 "digest": "sha256", 00:10:28.223 "dhgroup": "ffdhe2048" 00:10:28.223 } 00:10:28.223 } 00:10:28.223 ]' 00:10:28.223 07:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:28.223 07:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:28.223 07:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:28.223 07:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:28.223 07:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:28.223 07:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:28.223 07:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:28.223 07:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.479 07:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:10:29.044 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:29.044 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:29.044 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.044 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.044 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.044 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:29.044 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:29.044 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:29.044 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:29.303 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:29.303 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:29.303 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:29.303 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:29.303 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:29.303 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:29.303 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:29.303 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.303 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.303 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.303 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:29.303 07:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:29.869 00:10:29.869 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:29.869 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:29.869 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:30.127 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.127 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.127 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.127 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.127 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.127 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:30.127 { 00:10:30.127 "cntlid": 17, 00:10:30.127 "qid": 0, 00:10:30.127 "state": "enabled", 00:10:30.127 "thread": "nvmf_tgt_poll_group_000", 00:10:30.127 "listen_address": { 00:10:30.127 "trtype": "TCP", 00:10:30.127 "adrfam": "IPv4", 00:10:30.127 "traddr": "10.0.0.2", 00:10:30.127 "trsvcid": "4420" 00:10:30.127 }, 00:10:30.127 "peer_address": { 00:10:30.127 "trtype": "TCP", 00:10:30.127 "adrfam": "IPv4", 00:10:30.127 "traddr": "10.0.0.1", 00:10:30.127 "trsvcid": "58668" 00:10:30.127 }, 00:10:30.127 "auth": { 00:10:30.127 "state": "completed", 00:10:30.127 "digest": "sha256", 00:10:30.127 "dhgroup": "ffdhe3072" 00:10:30.127 } 00:10:30.128 } 00:10:30.128 ]' 00:10:30.128 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:30.128 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:30.128 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:30.128 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:30.128 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:30.128 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.128 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.128 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.385 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:10:31.320 07:35:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:31.320 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:31.320 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.320 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.320 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.320 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:31.320 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:31.320 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:31.320 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:10:31.320 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:31.320 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:31.320 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:31.320 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:31.320 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.320 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:31.320 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.320 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.321 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.321 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:31.321 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:31.579 00:10:31.579 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:31.579 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
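As the @92/@93 markers in the trace indicate, this whole section is driven by a nested loop: the outer loop walks the dhgroup list (null, then ffdhe2048, now ffdhe3072 in this excerpt) and the inner loop walks the key indices, re-applying bdev_nvme_set_options before each cycle. A sketch of that driver, reusing the assumed connect_authenticate_sketch helper above and with placeholder key values:

keys=(k0 k1 k2 k3)                              # stand-ins for the DHHC-1 keys registered earlier
dhgroups=("null" "ffdhe2048" "ffdhe3072")       # only the groups visible in this excerpt
for dhgroup in "${dhgroups[@]}"; do             # target/auth.sh@92
    for keyid in "${!keys[@]}"; do              # target/auth.sh@93
        # Re-arm the host with exactly one digest/dhgroup pair ...            (@94)
        scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        # ... then run the add_host / attach / verify / nvme-connect cycle.   (@96)
        connect_authenticate_sketch sha256 "$dhgroup" "$keyid"
    done
done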
00:10:31.579 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.146 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.146 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.146 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.146 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.146 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.146 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:32.146 { 00:10:32.146 "cntlid": 19, 00:10:32.146 "qid": 0, 00:10:32.146 "state": "enabled", 00:10:32.146 "thread": "nvmf_tgt_poll_group_000", 00:10:32.146 "listen_address": { 00:10:32.146 "trtype": "TCP", 00:10:32.146 "adrfam": "IPv4", 00:10:32.146 "traddr": "10.0.0.2", 00:10:32.146 "trsvcid": "4420" 00:10:32.146 }, 00:10:32.146 "peer_address": { 00:10:32.146 "trtype": "TCP", 00:10:32.146 "adrfam": "IPv4", 00:10:32.146 "traddr": "10.0.0.1", 00:10:32.146 "trsvcid": "58692" 00:10:32.146 }, 00:10:32.146 "auth": { 00:10:32.146 "state": "completed", 00:10:32.146 "digest": "sha256", 00:10:32.146 "dhgroup": "ffdhe3072" 00:10:32.146 } 00:10:32.146 } 00:10:32.146 ]' 00:10:32.146 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:32.146 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:32.146 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:32.146 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:32.146 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:32.146 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.146 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.146 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:32.405 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:01:YzZmNDg0YzRmMWMxNjhhZmRiMDA4ZmM0YzI2MGYyNTZo4xV2: --dhchap-ctrl-secret DHHC-1:02:N2M0YzNlNjFjZmZlYjA3Y2YxYTFlM2E0MmM5MGYyOGQ3NDRhODM1ZjI5MWY0NGM0FQ/mQw==: 00:10:32.972 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:32.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:32.972 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:32.972 07:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.972 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.972 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.972 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:32.972 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:32.972 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:33.231 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:10:33.231 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:33.231 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:33.231 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:33.231 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:33.231 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.231 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.231 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.231 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.231 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.231 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.231 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.489 00:10:33.489 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:33.489 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:33.489 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:33.759 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:33.759 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:10:33.759 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.759 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.759 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.759 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:33.759 { 00:10:33.759 "cntlid": 21, 00:10:33.760 "qid": 0, 00:10:33.760 "state": "enabled", 00:10:33.760 "thread": "nvmf_tgt_poll_group_000", 00:10:33.760 "listen_address": { 00:10:33.760 "trtype": "TCP", 00:10:33.760 "adrfam": "IPv4", 00:10:33.760 "traddr": "10.0.0.2", 00:10:33.760 "trsvcid": "4420" 00:10:33.760 }, 00:10:33.760 "peer_address": { 00:10:33.760 "trtype": "TCP", 00:10:33.760 "adrfam": "IPv4", 00:10:33.760 "traddr": "10.0.0.1", 00:10:33.760 "trsvcid": "58710" 00:10:33.760 }, 00:10:33.760 "auth": { 00:10:33.760 "state": "completed", 00:10:33.760 "digest": "sha256", 00:10:33.760 "dhgroup": "ffdhe3072" 00:10:33.760 } 00:10:33.760 } 00:10:33.760 ]' 00:10:33.760 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:33.760 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:33.760 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:34.030 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:34.030 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:34.030 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.030 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.030 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.288 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:02:ZDQ0NWJjNzZhYTYxMGZhNzkwNzYyODE5ZGEwYmQyNDIxYWFlNWQ2MGQ5NDk4MGFiHBFIeQ==: --dhchap-ctrl-secret DHHC-1:01:NmIzM2IxZTI5N2NmZjU2ODU2MDc4OTc4ZGM3MzJmOTeZS3F4: 00:10:34.856 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:34.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:34.856 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:34.856 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.856 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.856 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.856 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:10:34.856 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:34.856 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:35.115 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:10:35.115 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:35.115 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:35.115 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:35.115 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:35.115 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.115 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:10:35.115 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.115 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.115 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.115 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:35.115 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:35.373 00:10:35.631 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:35.631 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.631 07:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:35.890 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.890 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.890 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.890 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.890 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.890 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:35.890 { 00:10:35.890 "cntlid": 
23, 00:10:35.890 "qid": 0, 00:10:35.890 "state": "enabled", 00:10:35.890 "thread": "nvmf_tgt_poll_group_000", 00:10:35.890 "listen_address": { 00:10:35.890 "trtype": "TCP", 00:10:35.890 "adrfam": "IPv4", 00:10:35.890 "traddr": "10.0.0.2", 00:10:35.890 "trsvcid": "4420" 00:10:35.890 }, 00:10:35.890 "peer_address": { 00:10:35.890 "trtype": "TCP", 00:10:35.890 "adrfam": "IPv4", 00:10:35.890 "traddr": "10.0.0.1", 00:10:35.890 "trsvcid": "55854" 00:10:35.890 }, 00:10:35.890 "auth": { 00:10:35.890 "state": "completed", 00:10:35.890 "digest": "sha256", 00:10:35.890 "dhgroup": "ffdhe3072" 00:10:35.890 } 00:10:35.890 } 00:10:35.890 ]' 00:10:35.890 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:35.890 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:35.890 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:35.890 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:35.890 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:35.890 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.890 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.890 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:36.148 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:10:37.082 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:37.083 07:36:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.083 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.341 00:10:37.341 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:37.341 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:37.341 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.600 07:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.600 07:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.600 07:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.600 07:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.600 07:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.600 07:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:37.600 { 00:10:37.600 "cntlid": 25, 00:10:37.600 "qid": 0, 00:10:37.600 "state": "enabled", 00:10:37.600 "thread": "nvmf_tgt_poll_group_000", 00:10:37.600 "listen_address": { 00:10:37.600 "trtype": "TCP", 00:10:37.600 "adrfam": "IPv4", 00:10:37.600 "traddr": "10.0.0.2", 00:10:37.600 "trsvcid": "4420" 00:10:37.600 }, 00:10:37.600 "peer_address": { 00:10:37.600 "trtype": "TCP", 00:10:37.600 
"adrfam": "IPv4", 00:10:37.600 "traddr": "10.0.0.1", 00:10:37.600 "trsvcid": "55878" 00:10:37.600 }, 00:10:37.600 "auth": { 00:10:37.600 "state": "completed", 00:10:37.600 "digest": "sha256", 00:10:37.600 "dhgroup": "ffdhe4096" 00:10:37.600 } 00:10:37.600 } 00:10:37.600 ]' 00:10:37.600 07:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:37.861 07:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:37.861 07:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:37.861 07:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:37.861 07:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:37.861 07:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.861 07:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.861 07:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.122 07:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:10:38.688 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.689 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:38.689 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.689 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.689 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.689 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:38.689 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:38.689 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:38.947 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:10:38.947 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:38.947 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:38.947 07:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:38.947 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:38.947 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.947 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.947 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.947 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.947 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.947 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.947 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.515 00:10:39.515 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:39.515 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.515 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:39.774 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.774 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.774 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.774 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.774 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.774 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:39.774 { 00:10:39.774 "cntlid": 27, 00:10:39.774 "qid": 0, 00:10:39.774 "state": "enabled", 00:10:39.774 "thread": "nvmf_tgt_poll_group_000", 00:10:39.774 "listen_address": { 00:10:39.774 "trtype": "TCP", 00:10:39.774 "adrfam": "IPv4", 00:10:39.774 "traddr": "10.0.0.2", 00:10:39.774 "trsvcid": "4420" 00:10:39.774 }, 00:10:39.774 "peer_address": { 00:10:39.774 "trtype": "TCP", 00:10:39.774 "adrfam": "IPv4", 00:10:39.774 "traddr": "10.0.0.1", 00:10:39.774 "trsvcid": "55916" 00:10:39.774 }, 00:10:39.774 "auth": { 00:10:39.774 "state": "completed", 00:10:39.774 "digest": "sha256", 00:10:39.774 "dhgroup": "ffdhe4096" 00:10:39.774 } 00:10:39.774 } 00:10:39.774 ]' 00:10:39.774 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:10:39.774 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:39.774 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:39.774 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:39.774 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:39.774 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.774 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.774 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.033 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:01:YzZmNDg0YzRmMWMxNjhhZmRiMDA4ZmM0YzI2MGYyNTZo4xV2: --dhchap-ctrl-secret DHHC-1:02:N2M0YzNlNjFjZmZlYjA3Y2YxYTFlM2E0MmM5MGYyOGQ3NDRhODM1ZjI5MWY0NGM0FQ/mQw==: 00:10:40.600 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.600 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:40.600 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.600 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.600 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.600 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:40.600 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:40.600 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:40.859 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:10:40.859 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:40.859 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:40.859 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:40.859 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:40.859 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.859 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.859 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.859 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.859 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.859 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.859 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:41.426 00:10:41.427 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:41.427 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.427 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:41.427 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.427 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.427 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.427 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.427 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.427 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:41.427 { 00:10:41.427 "cntlid": 29, 00:10:41.427 "qid": 0, 00:10:41.427 "state": "enabled", 00:10:41.427 "thread": "nvmf_tgt_poll_group_000", 00:10:41.427 "listen_address": { 00:10:41.427 "trtype": "TCP", 00:10:41.427 "adrfam": "IPv4", 00:10:41.427 "traddr": "10.0.0.2", 00:10:41.427 "trsvcid": "4420" 00:10:41.427 }, 00:10:41.427 "peer_address": { 00:10:41.427 "trtype": "TCP", 00:10:41.427 "adrfam": "IPv4", 00:10:41.427 "traddr": "10.0.0.1", 00:10:41.427 "trsvcid": "55934" 00:10:41.427 }, 00:10:41.427 "auth": { 00:10:41.427 "state": "completed", 00:10:41.427 "digest": "sha256", 00:10:41.427 "dhgroup": "ffdhe4096" 00:10:41.427 } 00:10:41.427 } 00:10:41.427 ]' 00:10:41.427 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:41.685 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:41.685 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:41.685 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:41.685 07:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:41.685 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.685 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.685 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.966 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:02:ZDQ0NWJjNzZhYTYxMGZhNzkwNzYyODE5ZGEwYmQyNDIxYWFlNWQ2MGQ5NDk4MGFiHBFIeQ==: --dhchap-ctrl-secret DHHC-1:01:NmIzM2IxZTI5N2NmZjU2ODU2MDc4OTc4ZGM3MzJmOTeZS3F4: 00:10:42.534 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.534 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:42.534 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.534 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.534 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.534 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:42.534 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:42.534 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:42.793 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:10:42.793 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:42.793 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:42.793 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:42.793 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:42.794 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.794 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:10:42.794 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.794 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.794 07:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.794 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:42.794 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:43.362 00:10:43.362 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:43.362 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.362 07:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:43.623 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.623 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.623 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.623 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.623 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.623 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:43.623 { 00:10:43.623 "cntlid": 31, 00:10:43.623 "qid": 0, 00:10:43.623 "state": "enabled", 00:10:43.623 "thread": "nvmf_tgt_poll_group_000", 00:10:43.623 "listen_address": { 00:10:43.623 "trtype": "TCP", 00:10:43.623 "adrfam": "IPv4", 00:10:43.623 "traddr": "10.0.0.2", 00:10:43.623 "trsvcid": "4420" 00:10:43.623 }, 00:10:43.623 "peer_address": { 00:10:43.623 "trtype": "TCP", 00:10:43.623 "adrfam": "IPv4", 00:10:43.623 "traddr": "10.0.0.1", 00:10:43.623 "trsvcid": "55960" 00:10:43.623 }, 00:10:43.623 "auth": { 00:10:43.623 "state": "completed", 00:10:43.623 "digest": "sha256", 00:10:43.623 "dhgroup": "ffdhe4096" 00:10:43.623 } 00:10:43.623 } 00:10:43.623 ]' 00:10:43.623 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:43.623 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:43.623 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:43.623 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:43.623 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:43.623 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.623 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.623 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.883 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:10:44.450 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.450 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:44.450 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.450 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.450 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.450 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:44.450 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:44.450 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:44.450 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:44.708 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:10:44.709 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:44.709 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:44.709 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:44.709 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:44.709 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.709 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.709 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.709 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.709 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.709 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:10:44.709 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.276 00:10:45.276 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:45.276 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.276 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:45.534 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.534 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.534 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.534 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.534 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.534 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:45.534 { 00:10:45.534 "cntlid": 33, 00:10:45.534 "qid": 0, 00:10:45.534 "state": "enabled", 00:10:45.534 "thread": "nvmf_tgt_poll_group_000", 00:10:45.534 "listen_address": { 00:10:45.534 "trtype": "TCP", 00:10:45.534 "adrfam": "IPv4", 00:10:45.534 "traddr": "10.0.0.2", 00:10:45.534 "trsvcid": "4420" 00:10:45.534 }, 00:10:45.534 "peer_address": { 00:10:45.534 "trtype": "TCP", 00:10:45.534 "adrfam": "IPv4", 00:10:45.534 "traddr": "10.0.0.1", 00:10:45.534 "trsvcid": "47934" 00:10:45.534 }, 00:10:45.534 "auth": { 00:10:45.534 "state": "completed", 00:10:45.534 "digest": "sha256", 00:10:45.534 "dhgroup": "ffdhe6144" 00:10:45.534 } 00:10:45.534 } 00:10:45.534 ]' 00:10:45.534 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:45.534 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:45.534 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:45.534 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:45.534 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:45.794 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.794 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.794 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.051 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 
437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:10:46.616 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.616 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:46.616 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.616 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.616 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.616 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:46.616 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:46.616 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:46.874 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:10:46.874 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:46.874 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:46.874 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:46.874 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:46.874 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.874 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.874 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.874 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.874 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.874 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.874 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.438 00:10:47.438 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:47.438 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.438 07:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:47.696 07:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.696 07:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.696 07:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.696 07:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.696 07:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.696 07:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:47.697 { 00:10:47.697 "cntlid": 35, 00:10:47.697 "qid": 0, 00:10:47.697 "state": "enabled", 00:10:47.697 "thread": "nvmf_tgt_poll_group_000", 00:10:47.697 "listen_address": { 00:10:47.697 "trtype": "TCP", 00:10:47.697 "adrfam": "IPv4", 00:10:47.697 "traddr": "10.0.0.2", 00:10:47.697 "trsvcid": "4420" 00:10:47.697 }, 00:10:47.697 "peer_address": { 00:10:47.697 "trtype": "TCP", 00:10:47.697 "adrfam": "IPv4", 00:10:47.697 "traddr": "10.0.0.1", 00:10:47.697 "trsvcid": "47966" 00:10:47.697 }, 00:10:47.697 "auth": { 00:10:47.697 "state": "completed", 00:10:47.697 "digest": "sha256", 00:10:47.697 "dhgroup": "ffdhe6144" 00:10:47.697 } 00:10:47.697 } 00:10:47.697 ]' 00:10:47.697 07:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:47.697 07:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:47.697 07:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:47.697 07:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:47.697 07:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:47.697 07:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.697 07:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.697 07:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.998 07:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:01:YzZmNDg0YzRmMWMxNjhhZmRiMDA4ZmM0YzI2MGYyNTZo4xV2: --dhchap-ctrl-secret DHHC-1:02:N2M0YzNlNjFjZmZlYjA3Y2YxYTFlM2E0MmM5MGYyOGQ3NDRhODM1ZjI5MWY0NGM0FQ/mQw==: 00:10:48.564 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.564 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.564 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:48.564 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.564 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.564 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.564 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:48.564 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:48.564 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:48.823 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:10:48.823 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:48.823 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:48.823 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:48.823 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:48.823 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.823 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.823 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.823 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.823 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.823 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.823 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.389 00:10:49.389 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:49.389 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:49.389 07:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.647 07:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.647 07:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.647 07:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.647 07:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.647 07:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.647 07:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:49.647 { 00:10:49.647 "cntlid": 37, 00:10:49.647 "qid": 0, 00:10:49.647 "state": "enabled", 00:10:49.647 "thread": "nvmf_tgt_poll_group_000", 00:10:49.647 "listen_address": { 00:10:49.647 "trtype": "TCP", 00:10:49.647 "adrfam": "IPv4", 00:10:49.647 "traddr": "10.0.0.2", 00:10:49.647 "trsvcid": "4420" 00:10:49.647 }, 00:10:49.647 "peer_address": { 00:10:49.647 "trtype": "TCP", 00:10:49.647 "adrfam": "IPv4", 00:10:49.647 "traddr": "10.0.0.1", 00:10:49.647 "trsvcid": "47984" 00:10:49.647 }, 00:10:49.647 "auth": { 00:10:49.647 "state": "completed", 00:10:49.647 "digest": "sha256", 00:10:49.647 "dhgroup": "ffdhe6144" 00:10:49.647 } 00:10:49.647 } 00:10:49.647 ]' 00:10:49.647 07:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:49.647 07:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:49.647 07:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:49.647 07:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:49.647 07:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:49.647 07:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.647 07:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.647 07:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.905 07:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:02:ZDQ0NWJjNzZhYTYxMGZhNzkwNzYyODE5ZGEwYmQyNDIxYWFlNWQ2MGQ5NDk4MGFiHBFIeQ==: --dhchap-ctrl-secret DHHC-1:01:NmIzM2IxZTI5N2NmZjU2ODU2MDc4OTc4ZGM3MzJmOTeZS3F4: 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
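The records above repeat one authentication round per key index: the host app is restricted to the digest/dhgroup under test, the key is registered for the host on the subsystem, a controller is attached through the host RPC socket, the resulting qpair's auth state is checked with jq, the controller is detached, the handshake is repeated with the kernel initiator via nvme-cli, and the host entry is removed again. A minimal bash sketch of that round, reconstructed only from the commands traced in this log and assuming the same RPC sockets, subsystem NQN and host UUID; SECRET/CTRL_SECRET stand in for the DHHC-1 key material and are placeholders, not the real keys:

# One sha256/ffdhe* DH-HMAC-CHAP round, as exercised by target/auth.sh (sketch).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a
digest=sha256
dhgroup=ffdhe6144            # the log cycles ffdhe3072/4096/6144/8192
keyid=2                      # key index for this round
SECRET='DHHC-1:02:...'       # placeholder for the host secret seen in the log
CTRL_SECRET='DHHC-1:01:...'  # placeholder; dropped for key indexes without a controller key (e.g. key3)

# Host side: only accept the digest/dhgroup under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Target side: allow this host on the subsystem with the key (and controller key when present).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Attach a controller through the host app, verify the qpair authenticated, then detach.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Repeat the handshake with the kernel initiator, then tear the host entry down.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 437e2608-a818-4ddb-8068-388d756b599a \
    --dhchap-secret "$SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The jq checks on '.[0].auth.digest' and '.[0].auth.dhgroup' in the surrounding records verify the same qpair object; the sketch only shows the state check for brevity.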
00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:50.840 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:51.406 00:10:51.406 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:51.406 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:51.406 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.664 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.664 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.664 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.664 07:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.664 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.664 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:51.664 { 00:10:51.664 "cntlid": 39, 00:10:51.664 "qid": 0, 00:10:51.664 "state": "enabled", 00:10:51.664 "thread": "nvmf_tgt_poll_group_000", 00:10:51.665 "listen_address": { 00:10:51.665 "trtype": "TCP", 00:10:51.665 "adrfam": "IPv4", 00:10:51.665 "traddr": "10.0.0.2", 00:10:51.665 "trsvcid": "4420" 00:10:51.665 }, 00:10:51.665 "peer_address": { 00:10:51.665 "trtype": "TCP", 00:10:51.665 "adrfam": "IPv4", 00:10:51.665 "traddr": "10.0.0.1", 00:10:51.665 "trsvcid": "48000" 00:10:51.665 }, 00:10:51.665 "auth": { 00:10:51.665 "state": "completed", 00:10:51.665 "digest": "sha256", 00:10:51.665 "dhgroup": "ffdhe6144" 00:10:51.665 } 00:10:51.665 } 00:10:51.665 ]' 00:10:51.665 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:51.665 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.665 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:51.665 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:51.665 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:51.665 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.665 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.665 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.923 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:10:52.859 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.859 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:52.859 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.859 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.859 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.859 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:52.859 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:52.859 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:52.859 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:53.117 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:10:53.117 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:53.117 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:53.117 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:53.117 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:53.117 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.117 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.117 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.117 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.117 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.117 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.117 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.685 00:10:53.685 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:53.685 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.685 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:53.944 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.944 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.944 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.944 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.944 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.944 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:53.944 { 00:10:53.944 "cntlid": 41, 00:10:53.944 "qid": 0, 
00:10:53.944 "state": "enabled", 00:10:53.944 "thread": "nvmf_tgt_poll_group_000", 00:10:53.944 "listen_address": { 00:10:53.944 "trtype": "TCP", 00:10:53.944 "adrfam": "IPv4", 00:10:53.944 "traddr": "10.0.0.2", 00:10:53.944 "trsvcid": "4420" 00:10:53.944 }, 00:10:53.944 "peer_address": { 00:10:53.944 "trtype": "TCP", 00:10:53.944 "adrfam": "IPv4", 00:10:53.944 "traddr": "10.0.0.1", 00:10:53.944 "trsvcid": "48028" 00:10:53.944 }, 00:10:53.944 "auth": { 00:10:53.944 "state": "completed", 00:10:53.944 "digest": "sha256", 00:10:53.944 "dhgroup": "ffdhe8192" 00:10:53.944 } 00:10:53.944 } 00:10:53.944 ]' 00:10:53.944 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:53.944 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:53.944 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:53.944 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:53.944 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:54.203 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.203 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.203 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.462 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:10:55.029 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.029 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:55.029 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.029 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.029 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.029 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:55.029 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:55.029 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:55.288 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:10:55.288 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:55.288 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:55.288 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:55.288 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:55.288 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.288 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.288 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.288 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.288 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.288 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.288 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.855 00:10:55.855 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:55.855 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:55.855 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.114 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.114 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.114 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.114 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.114 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.114 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:56.114 { 00:10:56.114 "cntlid": 43, 00:10:56.114 "qid": 0, 00:10:56.114 "state": "enabled", 00:10:56.114 "thread": "nvmf_tgt_poll_group_000", 00:10:56.114 "listen_address": { 00:10:56.114 "trtype": "TCP", 00:10:56.114 "adrfam": "IPv4", 00:10:56.114 "traddr": "10.0.0.2", 00:10:56.114 "trsvcid": "4420" 00:10:56.114 }, 00:10:56.114 "peer_address": { 00:10:56.114 "trtype": "TCP", 00:10:56.114 "adrfam": "IPv4", 00:10:56.114 "traddr": "10.0.0.1", 
00:10:56.114 "trsvcid": "46372" 00:10:56.114 }, 00:10:56.114 "auth": { 00:10:56.114 "state": "completed", 00:10:56.114 "digest": "sha256", 00:10:56.114 "dhgroup": "ffdhe8192" 00:10:56.114 } 00:10:56.114 } 00:10:56.114 ]' 00:10:56.114 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:56.114 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:56.114 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:56.114 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:56.114 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:56.372 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.372 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.372 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.631 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:01:YzZmNDg0YzRmMWMxNjhhZmRiMDA4ZmM0YzI2MGYyNTZo4xV2: --dhchap-ctrl-secret DHHC-1:02:N2M0YzNlNjFjZmZlYjA3Y2YxYTFlM2E0MmM5MGYyOGQ3NDRhODM1ZjI5MWY0NGM0FQ/mQw==: 00:10:57.199 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.199 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:57.199 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.199 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.199 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.199 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:57.199 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:57.199 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:57.458 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:10:57.458 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:57.458 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:57.458 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:57.458 07:36:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:57.458 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.458 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.458 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.458 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.458 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.458 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.458 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.026 00:10:58.026 07:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:58.026 07:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:58.026 07:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.284 07:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.284 07:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.284 07:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.284 07:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.284 07:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.284 07:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:58.284 { 00:10:58.284 "cntlid": 45, 00:10:58.284 "qid": 0, 00:10:58.284 "state": "enabled", 00:10:58.284 "thread": "nvmf_tgt_poll_group_000", 00:10:58.284 "listen_address": { 00:10:58.284 "trtype": "TCP", 00:10:58.284 "adrfam": "IPv4", 00:10:58.284 "traddr": "10.0.0.2", 00:10:58.284 "trsvcid": "4420" 00:10:58.284 }, 00:10:58.284 "peer_address": { 00:10:58.284 "trtype": "TCP", 00:10:58.284 "adrfam": "IPv4", 00:10:58.284 "traddr": "10.0.0.1", 00:10:58.284 "trsvcid": "46388" 00:10:58.284 }, 00:10:58.284 "auth": { 00:10:58.284 "state": "completed", 00:10:58.284 "digest": "sha256", 00:10:58.284 "dhgroup": "ffdhe8192" 00:10:58.284 } 00:10:58.284 } 00:10:58.284 ]' 00:10:58.284 07:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:58.284 07:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:10:58.284 07:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:58.542 07:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:58.542 07:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:58.542 07:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.542 07:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.542 07:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.801 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:02:ZDQ0NWJjNzZhYTYxMGZhNzkwNzYyODE5ZGEwYmQyNDIxYWFlNWQ2MGQ5NDk4MGFiHBFIeQ==: --dhchap-ctrl-secret DHHC-1:01:NmIzM2IxZTI5N2NmZjU2ODU2MDc4OTc4ZGM3MzJmOTeZS3F4: 00:10:59.367 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.367 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:10:59.367 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.367 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.367 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.367 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:59.368 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:59.368 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:59.626 07:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:10:59.626 07:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:59.626 07:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:59.626 07:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:59.626 07:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:59.626 07:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.626 07:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 
--dhchap-key key3 00:10:59.626 07:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.626 07:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.626 07:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.626 07:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:59.626 07:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:00.562 00:11:00.562 07:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:00.562 07:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:00.562 07:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.562 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.562 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.562 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.562 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.562 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.562 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:00.562 { 00:11:00.562 "cntlid": 47, 00:11:00.562 "qid": 0, 00:11:00.562 "state": "enabled", 00:11:00.562 "thread": "nvmf_tgt_poll_group_000", 00:11:00.562 "listen_address": { 00:11:00.562 "trtype": "TCP", 00:11:00.562 "adrfam": "IPv4", 00:11:00.562 "traddr": "10.0.0.2", 00:11:00.562 "trsvcid": "4420" 00:11:00.562 }, 00:11:00.562 "peer_address": { 00:11:00.562 "trtype": "TCP", 00:11:00.562 "adrfam": "IPv4", 00:11:00.562 "traddr": "10.0.0.1", 00:11:00.562 "trsvcid": "46404" 00:11:00.562 }, 00:11:00.562 "auth": { 00:11:00.562 "state": "completed", 00:11:00.562 "digest": "sha256", 00:11:00.562 "dhgroup": "ffdhe8192" 00:11:00.562 } 00:11:00.562 } 00:11:00.562 ]' 00:11:00.562 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:00.821 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:00.821 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:00.821 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:00.821 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:00.821 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:11:00.821 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.821 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.080 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:11:01.662 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.662 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:01.662 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.662 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.662 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.662 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:01.662 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:01.662 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:01.662 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:01.945 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:01.945 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:01.945 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:01.945 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:01.945 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:01.945 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:01.945 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.945 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.945 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.945 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.203 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.203 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.203 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.462 00:11:02.462 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:02.462 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:02.462 07:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.721 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.721 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.721 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.721 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.721 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.721 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:02.721 { 00:11:02.721 "cntlid": 49, 00:11:02.721 "qid": 0, 00:11:02.721 "state": "enabled", 00:11:02.721 "thread": "nvmf_tgt_poll_group_000", 00:11:02.721 "listen_address": { 00:11:02.721 "trtype": "TCP", 00:11:02.721 "adrfam": "IPv4", 00:11:02.721 "traddr": "10.0.0.2", 00:11:02.721 "trsvcid": "4420" 00:11:02.721 }, 00:11:02.721 "peer_address": { 00:11:02.721 "trtype": "TCP", 00:11:02.721 "adrfam": "IPv4", 00:11:02.722 "traddr": "10.0.0.1", 00:11:02.722 "trsvcid": "46432" 00:11:02.722 }, 00:11:02.722 "auth": { 00:11:02.722 "state": "completed", 00:11:02.722 "digest": "sha384", 00:11:02.722 "dhgroup": "null" 00:11:02.722 } 00:11:02.722 } 00:11:02.722 ]' 00:11:02.722 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:02.722 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:02.722 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:02.722 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:02.722 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:02.722 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.722 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.722 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.980 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:11:03.547 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.547 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:03.547 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.547 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.547 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.547 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:03.547 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:03.547 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:03.806 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:03.806 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:03.806 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:03.806 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:03.806 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:03.806 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.806 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.806 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.806 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.806 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.806 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.806 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.374 00:11:04.374 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:04.374 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.374 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:04.374 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.374 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.374 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.374 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.633 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.634 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:04.634 { 00:11:04.634 "cntlid": 51, 00:11:04.634 "qid": 0, 00:11:04.634 "state": "enabled", 00:11:04.634 "thread": "nvmf_tgt_poll_group_000", 00:11:04.634 "listen_address": { 00:11:04.634 "trtype": "TCP", 00:11:04.634 "adrfam": "IPv4", 00:11:04.634 "traddr": "10.0.0.2", 00:11:04.634 "trsvcid": "4420" 00:11:04.634 }, 00:11:04.634 "peer_address": { 00:11:04.634 "trtype": "TCP", 00:11:04.634 "adrfam": "IPv4", 00:11:04.634 "traddr": "10.0.0.1", 00:11:04.634 "trsvcid": "37192" 00:11:04.634 }, 00:11:04.634 "auth": { 00:11:04.634 "state": "completed", 00:11:04.634 "digest": "sha384", 00:11:04.634 "dhgroup": "null" 00:11:04.634 } 00:11:04.634 } 00:11:04.634 ]' 00:11:04.634 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:04.634 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:04.634 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:04.634 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:04.634 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:04.634 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.634 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.634 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.893 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:01:YzZmNDg0YzRmMWMxNjhhZmRiMDA4ZmM0YzI2MGYyNTZo4xV2: --dhchap-ctrl-secret 
DHHC-1:02:N2M0YzNlNjFjZmZlYjA3Y2YxYTFlM2E0MmM5MGYyOGQ3NDRhODM1ZjI5MWY0NGM0FQ/mQw==: 00:11:05.460 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.460 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:05.460 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.460 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.460 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.460 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:05.460 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:05.460 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:05.719 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:05.719 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:05.719 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:05.719 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:05.719 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:05.719 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.719 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.719 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.719 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.719 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.719 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.719 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.286 00:11:06.287 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:06.287 07:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.287 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:06.287 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.287 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.287 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.287 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.546 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.546 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:06.546 { 00:11:06.546 "cntlid": 53, 00:11:06.546 "qid": 0, 00:11:06.546 "state": "enabled", 00:11:06.546 "thread": "nvmf_tgt_poll_group_000", 00:11:06.546 "listen_address": { 00:11:06.546 "trtype": "TCP", 00:11:06.546 "adrfam": "IPv4", 00:11:06.546 "traddr": "10.0.0.2", 00:11:06.546 "trsvcid": "4420" 00:11:06.546 }, 00:11:06.546 "peer_address": { 00:11:06.546 "trtype": "TCP", 00:11:06.546 "adrfam": "IPv4", 00:11:06.546 "traddr": "10.0.0.1", 00:11:06.546 "trsvcid": "37218" 00:11:06.546 }, 00:11:06.546 "auth": { 00:11:06.546 "state": "completed", 00:11:06.546 "digest": "sha384", 00:11:06.546 "dhgroup": "null" 00:11:06.546 } 00:11:06.546 } 00:11:06.546 ]' 00:11:06.546 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:06.546 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:06.546 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:06.546 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:06.546 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:06.546 07:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.546 07:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.546 07:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.804 07:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:02:ZDQ0NWJjNzZhYTYxMGZhNzkwNzYyODE5ZGEwYmQyNDIxYWFlNWQ2MGQ5NDk4MGFiHBFIeQ==: --dhchap-ctrl-secret DHHC-1:01:NmIzM2IxZTI5N2NmZjU2ODU2MDc4OTc4ZGM3MzJmOTeZS3F4: 00:11:07.371 07:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.371 07:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:07.371 07:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.371 07:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.371 07:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.371 07:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:07.371 07:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:07.371 07:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:07.629 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:11:07.629 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:07.629 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:07.629 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:07.629 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:07.629 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.629 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:11:07.629 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.629 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.888 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.888 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:07.888 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:08.147 00:11:08.147 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:08.147 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:08.147 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.405 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.405 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:11:08.405 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.405 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.405 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.405 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:08.405 { 00:11:08.405 "cntlid": 55, 00:11:08.405 "qid": 0, 00:11:08.405 "state": "enabled", 00:11:08.405 "thread": "nvmf_tgt_poll_group_000", 00:11:08.405 "listen_address": { 00:11:08.405 "trtype": "TCP", 00:11:08.405 "adrfam": "IPv4", 00:11:08.405 "traddr": "10.0.0.2", 00:11:08.405 "trsvcid": "4420" 00:11:08.405 }, 00:11:08.405 "peer_address": { 00:11:08.405 "trtype": "TCP", 00:11:08.405 "adrfam": "IPv4", 00:11:08.405 "traddr": "10.0.0.1", 00:11:08.405 "trsvcid": "37256" 00:11:08.405 }, 00:11:08.405 "auth": { 00:11:08.405 "state": "completed", 00:11:08.405 "digest": "sha384", 00:11:08.405 "dhgroup": "null" 00:11:08.405 } 00:11:08.405 } 00:11:08.405 ]' 00:11:08.405 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:08.405 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:08.406 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:08.406 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:08.406 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:08.406 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.406 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.406 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.664 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:11:09.231 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.231 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:09.231 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.231 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.231 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.231 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:09.231 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:09.231 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:09.232 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:09.490 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:09.490 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:09.490 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:09.490 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:09.490 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:09.490 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.490 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.490 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.490 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.490 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.490 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.490 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.058 00:11:10.058 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:10.058 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:10.058 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.317 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.317 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.317 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.317 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.317 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.317 07:36:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:10.317 { 00:11:10.317 "cntlid": 57, 00:11:10.317 "qid": 0, 00:11:10.317 "state": "enabled", 00:11:10.317 "thread": "nvmf_tgt_poll_group_000", 00:11:10.317 "listen_address": { 00:11:10.317 "trtype": "TCP", 00:11:10.317 "adrfam": "IPv4", 00:11:10.317 "traddr": "10.0.0.2", 00:11:10.317 "trsvcid": "4420" 00:11:10.317 }, 00:11:10.317 "peer_address": { 00:11:10.317 "trtype": "TCP", 00:11:10.317 "adrfam": "IPv4", 00:11:10.317 "traddr": "10.0.0.1", 00:11:10.317 "trsvcid": "37290" 00:11:10.317 }, 00:11:10.317 "auth": { 00:11:10.317 "state": "completed", 00:11:10.317 "digest": "sha384", 00:11:10.317 "dhgroup": "ffdhe2048" 00:11:10.317 } 00:11:10.317 } 00:11:10.317 ]' 00:11:10.317 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:10.318 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:10.318 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:10.318 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:10.318 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:10.318 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.318 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.318 07:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.576 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:11:11.512 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.512 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:11.512 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.512 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.512 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.512 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:11.512 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:11.512 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:11.512 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:11.512 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:11.512 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:11.512 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:11.512 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:11.512 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.512 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.512 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.512 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.512 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.512 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.512 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.770 00:11:11.770 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:11.770 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.770 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.028 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.028 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.028 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.028 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.028 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.028 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:12.028 { 00:11:12.028 "cntlid": 59, 00:11:12.028 "qid": 0, 00:11:12.028 "state": "enabled", 00:11:12.028 "thread": "nvmf_tgt_poll_group_000", 00:11:12.028 "listen_address": { 00:11:12.028 "trtype": "TCP", 00:11:12.028 "adrfam": "IPv4", 00:11:12.028 "traddr": "10.0.0.2", 00:11:12.028 "trsvcid": "4420" 
00:11:12.028 }, 00:11:12.028 "peer_address": { 00:11:12.028 "trtype": "TCP", 00:11:12.028 "adrfam": "IPv4", 00:11:12.028 "traddr": "10.0.0.1", 00:11:12.028 "trsvcid": "37314" 00:11:12.028 }, 00:11:12.028 "auth": { 00:11:12.028 "state": "completed", 00:11:12.028 "digest": "sha384", 00:11:12.028 "dhgroup": "ffdhe2048" 00:11:12.028 } 00:11:12.028 } 00:11:12.028 ]' 00:11:12.028 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:12.286 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:12.286 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:12.286 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:12.286 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:12.286 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.286 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.286 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.543 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:01:YzZmNDg0YzRmMWMxNjhhZmRiMDA4ZmM0YzI2MGYyNTZo4xV2: --dhchap-ctrl-secret DHHC-1:02:N2M0YzNlNjFjZmZlYjA3Y2YxYTFlM2E0MmM5MGYyOGQ3NDRhODM1ZjI5MWY0NGM0FQ/mQw==: 00:11:13.109 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.109 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:13.109 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.109 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.109 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.109 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:13.109 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:13.109 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:13.367 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:13.367 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:13.367 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:11:13.367 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:13.367 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:13.367 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.367 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.367 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.367 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.367 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.367 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.367 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.640 00:11:13.640 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:13.640 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:13.640 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.898 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.898 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.898 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.898 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.898 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.898 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.898 { 00:11:13.898 "cntlid": 61, 00:11:13.898 "qid": 0, 00:11:13.898 "state": "enabled", 00:11:13.898 "thread": "nvmf_tgt_poll_group_000", 00:11:13.898 "listen_address": { 00:11:13.898 "trtype": "TCP", 00:11:13.898 "adrfam": "IPv4", 00:11:13.898 "traddr": "10.0.0.2", 00:11:13.898 "trsvcid": "4420" 00:11:13.898 }, 00:11:13.898 "peer_address": { 00:11:13.898 "trtype": "TCP", 00:11:13.898 "adrfam": "IPv4", 00:11:13.898 "traddr": "10.0.0.1", 00:11:13.898 "trsvcid": "42938" 00:11:13.898 }, 00:11:13.898 "auth": { 00:11:13.898 "state": "completed", 00:11:13.898 "digest": "sha384", 00:11:13.898 "dhgroup": "ffdhe2048" 00:11:13.898 } 00:11:13.898 } 00:11:13.898 ]' 00:11:13.898 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:14.156 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:14.156 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:14.156 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:14.156 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:14.156 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.156 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.156 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.415 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:02:ZDQ0NWJjNzZhYTYxMGZhNzkwNzYyODE5ZGEwYmQyNDIxYWFlNWQ2MGQ5NDk4MGFiHBFIeQ==: --dhchap-ctrl-secret DHHC-1:01:NmIzM2IxZTI5N2NmZjU2ODU2MDc4OTc4ZGM3MzJmOTeZS3F4: 00:11:14.980 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.980 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:14.980 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.980 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.980 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.980 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:14.980 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:14.980 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:15.239 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:15.239 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:15.239 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:15.239 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:15.239 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:15.239 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.239 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:11:15.239 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.239 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.239 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.239 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:15.239 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:15.497 00:11:15.497 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:15.497 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:15.497 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.755 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.755 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.755 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.755 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.755 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.755 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:15.755 { 00:11:15.755 "cntlid": 63, 00:11:15.755 "qid": 0, 00:11:15.755 "state": "enabled", 00:11:15.755 "thread": "nvmf_tgt_poll_group_000", 00:11:15.755 "listen_address": { 00:11:15.755 "trtype": "TCP", 00:11:15.755 "adrfam": "IPv4", 00:11:15.755 "traddr": "10.0.0.2", 00:11:15.755 "trsvcid": "4420" 00:11:15.755 }, 00:11:15.755 "peer_address": { 00:11:15.755 "trtype": "TCP", 00:11:15.755 "adrfam": "IPv4", 00:11:15.755 "traddr": "10.0.0.1", 00:11:15.755 "trsvcid": "42956" 00:11:15.755 }, 00:11:15.755 "auth": { 00:11:15.755 "state": "completed", 00:11:15.755 "digest": "sha384", 00:11:15.755 "dhgroup": "ffdhe2048" 00:11:15.755 } 00:11:15.755 } 00:11:15.755 ]' 00:11:15.755 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:15.755 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:15.755 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:16.014 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:16.014 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:11:16.014 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.014 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.014 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.272 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:11:16.839 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.839 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:16.839 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.839 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.839 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.839 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:16.839 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:16.839 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:16.839 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:17.098 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:17.098 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:17.098 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:17.098 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:17.098 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:17.098 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.098 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.098 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.098 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.098 07:36:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.098 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.098 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.357 00:11:17.357 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:17.357 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.357 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.616 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.616 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.616 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.616 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.616 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.616 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:17.616 { 00:11:17.616 "cntlid": 65, 00:11:17.616 "qid": 0, 00:11:17.616 "state": "enabled", 00:11:17.616 "thread": "nvmf_tgt_poll_group_000", 00:11:17.616 "listen_address": { 00:11:17.616 "trtype": "TCP", 00:11:17.616 "adrfam": "IPv4", 00:11:17.616 "traddr": "10.0.0.2", 00:11:17.616 "trsvcid": "4420" 00:11:17.616 }, 00:11:17.616 "peer_address": { 00:11:17.616 "trtype": "TCP", 00:11:17.616 "adrfam": "IPv4", 00:11:17.616 "traddr": "10.0.0.1", 00:11:17.616 "trsvcid": "42982" 00:11:17.616 }, 00:11:17.616 "auth": { 00:11:17.616 "state": "completed", 00:11:17.616 "digest": "sha384", 00:11:17.616 "dhgroup": "ffdhe3072" 00:11:17.616 } 00:11:17.616 } 00:11:17.616 ]' 00:11:17.616 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.616 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:17.616 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:17.886 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:17.886 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:17.886 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.886 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.886 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.146 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:11:18.712 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.712 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:18.712 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.712 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.712 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.712 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:18.712 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:18.712 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:18.970 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:11:18.970 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:18.970 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:18.970 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:18.970 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:18.970 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.970 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:18.970 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.970 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.970 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.970 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:11:18.970 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.537 00:11:19.537 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:19.537 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:19.537 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.537 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.537 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.537 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.537 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.537 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.537 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:19.537 { 00:11:19.537 "cntlid": 67, 00:11:19.537 "qid": 0, 00:11:19.537 "state": "enabled", 00:11:19.537 "thread": "nvmf_tgt_poll_group_000", 00:11:19.537 "listen_address": { 00:11:19.537 "trtype": "TCP", 00:11:19.537 "adrfam": "IPv4", 00:11:19.537 "traddr": "10.0.0.2", 00:11:19.537 "trsvcid": "4420" 00:11:19.537 }, 00:11:19.537 "peer_address": { 00:11:19.537 "trtype": "TCP", 00:11:19.537 "adrfam": "IPv4", 00:11:19.537 "traddr": "10.0.0.1", 00:11:19.537 "trsvcid": "43014" 00:11:19.537 }, 00:11:19.537 "auth": { 00:11:19.537 "state": "completed", 00:11:19.537 "digest": "sha384", 00:11:19.537 "dhgroup": "ffdhe3072" 00:11:19.537 } 00:11:19.537 } 00:11:19.537 ]' 00:11:19.537 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:19.796 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:19.796 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:19.796 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:19.796 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:19.796 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.796 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.796 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.054 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 
437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:01:YzZmNDg0YzRmMWMxNjhhZmRiMDA4ZmM0YzI2MGYyNTZo4xV2: --dhchap-ctrl-secret DHHC-1:02:N2M0YzNlNjFjZmZlYjA3Y2YxYTFlM2E0MmM5MGYyOGQ3NDRhODM1ZjI5MWY0NGM0FQ/mQw==: 00:11:20.621 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.621 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:20.621 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.621 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.621 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.621 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:20.621 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:20.621 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:20.879 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:11:20.879 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.879 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:20.879 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:20.879 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:20.879 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.879 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.879 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.879 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.879 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.879 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.879 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:11:21.137 00:11:21.137 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:21.137 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.137 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.704 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.704 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.704 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.704 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.704 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.704 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:21.704 { 00:11:21.704 "cntlid": 69, 00:11:21.704 "qid": 0, 00:11:21.704 "state": "enabled", 00:11:21.704 "thread": "nvmf_tgt_poll_group_000", 00:11:21.704 "listen_address": { 00:11:21.704 "trtype": "TCP", 00:11:21.704 "adrfam": "IPv4", 00:11:21.704 "traddr": "10.0.0.2", 00:11:21.704 "trsvcid": "4420" 00:11:21.704 }, 00:11:21.704 "peer_address": { 00:11:21.704 "trtype": "TCP", 00:11:21.704 "adrfam": "IPv4", 00:11:21.704 "traddr": "10.0.0.1", 00:11:21.704 "trsvcid": "43038" 00:11:21.704 }, 00:11:21.704 "auth": { 00:11:21.704 "state": "completed", 00:11:21.704 "digest": "sha384", 00:11:21.704 "dhgroup": "ffdhe3072" 00:11:21.704 } 00:11:21.704 } 00:11:21.704 ]' 00:11:21.704 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:21.704 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:21.704 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:21.704 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:21.704 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:21.704 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.704 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.704 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.963 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:02:ZDQ0NWJjNzZhYTYxMGZhNzkwNzYyODE5ZGEwYmQyNDIxYWFlNWQ2MGQ5NDk4MGFiHBFIeQ==: --dhchap-ctrl-secret DHHC-1:01:NmIzM2IxZTI5N2NmZjU2ODU2MDc4OTc4ZGM3MzJmOTeZS3F4: 00:11:22.529 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:11:22.529 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:22.529 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.529 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.529 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.529 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:22.529 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:22.529 07:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:22.788 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:11:22.788 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:22.788 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:22.788 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:22.788 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:22.788 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.788 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:11:22.788 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.788 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.788 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.788 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:22.788 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:23.046 00:11:23.046 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.046 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.046 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.304 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.304 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.304 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.304 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.304 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.304 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:23.304 { 00:11:23.304 "cntlid": 71, 00:11:23.304 "qid": 0, 00:11:23.304 "state": "enabled", 00:11:23.304 "thread": "nvmf_tgt_poll_group_000", 00:11:23.304 "listen_address": { 00:11:23.304 "trtype": "TCP", 00:11:23.304 "adrfam": "IPv4", 00:11:23.304 "traddr": "10.0.0.2", 00:11:23.304 "trsvcid": "4420" 00:11:23.304 }, 00:11:23.304 "peer_address": { 00:11:23.304 "trtype": "TCP", 00:11:23.304 "adrfam": "IPv4", 00:11:23.304 "traddr": "10.0.0.1", 00:11:23.304 "trsvcid": "43064" 00:11:23.304 }, 00:11:23.304 "auth": { 00:11:23.304 "state": "completed", 00:11:23.304 "digest": "sha384", 00:11:23.304 "dhgroup": "ffdhe3072" 00:11:23.304 } 00:11:23.304 } 00:11:23.304 ]' 00:11:23.304 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.304 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:23.304 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:23.562 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:23.562 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:23.562 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.562 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.562 07:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.820 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:11:24.387 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.387 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:24.387 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.387 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.387 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:24.387 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:24.387 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:24.387 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:24.387 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:24.646 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:11:24.646 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:24.646 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:24.646 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:24.646 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:24.646 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.646 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:24.646 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.646 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.646 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.646 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:24.646 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:24.904 00:11:24.904 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:24.904 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:24.904 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.163 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.163 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.163 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.163 07:36:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.421 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.421 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:25.421 { 00:11:25.421 "cntlid": 73, 00:11:25.421 "qid": 0, 00:11:25.421 "state": "enabled", 00:11:25.421 "thread": "nvmf_tgt_poll_group_000", 00:11:25.421 "listen_address": { 00:11:25.421 "trtype": "TCP", 00:11:25.421 "adrfam": "IPv4", 00:11:25.421 "traddr": "10.0.0.2", 00:11:25.421 "trsvcid": "4420" 00:11:25.421 }, 00:11:25.421 "peer_address": { 00:11:25.421 "trtype": "TCP", 00:11:25.421 "adrfam": "IPv4", 00:11:25.421 "traddr": "10.0.0.1", 00:11:25.421 "trsvcid": "42360" 00:11:25.421 }, 00:11:25.421 "auth": { 00:11:25.421 "state": "completed", 00:11:25.421 "digest": "sha384", 00:11:25.421 "dhgroup": "ffdhe4096" 00:11:25.421 } 00:11:25.421 } 00:11:25.421 ]' 00:11:25.421 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:25.421 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:25.421 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:25.421 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:25.421 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:25.421 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.421 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.421 07:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.679 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:11:26.247 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.247 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:26.247 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.247 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.247 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.247 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:26.247 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:26.247 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:26.506 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:11:26.506 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:26.506 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:26.506 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:26.506 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:26.506 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.506 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:26.506 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.506 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.506 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.506 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:26.506 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:26.765 00:11:27.023 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.023 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.023 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.282 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.282 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.282 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.282 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.282 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.282 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:27.282 { 00:11:27.282 "cntlid": 75, 00:11:27.282 "qid": 0, 00:11:27.282 
"state": "enabled", 00:11:27.282 "thread": "nvmf_tgt_poll_group_000", 00:11:27.282 "listen_address": { 00:11:27.282 "trtype": "TCP", 00:11:27.282 "adrfam": "IPv4", 00:11:27.282 "traddr": "10.0.0.2", 00:11:27.282 "trsvcid": "4420" 00:11:27.282 }, 00:11:27.282 "peer_address": { 00:11:27.282 "trtype": "TCP", 00:11:27.282 "adrfam": "IPv4", 00:11:27.282 "traddr": "10.0.0.1", 00:11:27.282 "trsvcid": "42392" 00:11:27.282 }, 00:11:27.282 "auth": { 00:11:27.282 "state": "completed", 00:11:27.282 "digest": "sha384", 00:11:27.282 "dhgroup": "ffdhe4096" 00:11:27.282 } 00:11:27.282 } 00:11:27.282 ]' 00:11:27.282 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:27.282 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:27.282 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:27.282 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:27.282 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:27.282 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.282 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.282 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.541 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:01:YzZmNDg0YzRmMWMxNjhhZmRiMDA4ZmM0YzI2MGYyNTZo4xV2: --dhchap-ctrl-secret DHHC-1:02:N2M0YzNlNjFjZmZlYjA3Y2YxYTFlM2E0MmM5MGYyOGQ3NDRhODM1ZjI5MWY0NGM0FQ/mQw==: 00:11:28.108 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.365 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:28.365 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.365 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.365 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.365 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:28.365 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:28.365 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:28.641 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
00:11:28.641 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:28.641 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:28.641 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:28.641 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:28.641 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.641 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:28.641 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.641 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.641 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.641 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:28.641 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:28.906 00:11:28.906 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:28.906 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:28.906 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.164 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.164 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.164 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.164 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.164 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.164 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:29.164 { 00:11:29.164 "cntlid": 77, 00:11:29.164 "qid": 0, 00:11:29.164 "state": "enabled", 00:11:29.164 "thread": "nvmf_tgt_poll_group_000", 00:11:29.164 "listen_address": { 00:11:29.164 "trtype": "TCP", 00:11:29.164 "adrfam": "IPv4", 00:11:29.164 "traddr": "10.0.0.2", 00:11:29.164 "trsvcid": "4420" 00:11:29.164 }, 00:11:29.164 "peer_address": { 00:11:29.164 "trtype": "TCP", 00:11:29.164 "adrfam": "IPv4", 00:11:29.164 "traddr": "10.0.0.1", 00:11:29.164 "trsvcid": "42424" 00:11:29.164 }, 00:11:29.164 
"auth": { 00:11:29.164 "state": "completed", 00:11:29.164 "digest": "sha384", 00:11:29.164 "dhgroup": "ffdhe4096" 00:11:29.164 } 00:11:29.164 } 00:11:29.164 ]' 00:11:29.164 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:29.164 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.164 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:29.164 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:29.164 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:29.164 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.164 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.164 07:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.423 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:02:ZDQ0NWJjNzZhYTYxMGZhNzkwNzYyODE5ZGEwYmQyNDIxYWFlNWQ2MGQ5NDk4MGFiHBFIeQ==: --dhchap-ctrl-secret DHHC-1:01:NmIzM2IxZTI5N2NmZjU2ODU2MDc4OTc4ZGM3MzJmOTeZS3F4: 00:11:30.360 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.360 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:30.360 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.360 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.360 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.360 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:30.360 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:30.360 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:30.619 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:11:30.619 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:30.619 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:30.619 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:30.619 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:11:30.619 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.619 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:11:30.619 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.619 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.619 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.619 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:30.619 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:30.878 00:11:30.878 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:30.878 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.878 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:31.137 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.137 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.137 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.137 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.137 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.137 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:31.137 { 00:11:31.137 "cntlid": 79, 00:11:31.137 "qid": 0, 00:11:31.137 "state": "enabled", 00:11:31.137 "thread": "nvmf_tgt_poll_group_000", 00:11:31.137 "listen_address": { 00:11:31.137 "trtype": "TCP", 00:11:31.137 "adrfam": "IPv4", 00:11:31.137 "traddr": "10.0.0.2", 00:11:31.137 "trsvcid": "4420" 00:11:31.137 }, 00:11:31.137 "peer_address": { 00:11:31.137 "trtype": "TCP", 00:11:31.137 "adrfam": "IPv4", 00:11:31.137 "traddr": "10.0.0.1", 00:11:31.137 "trsvcid": "42446" 00:11:31.137 }, 00:11:31.137 "auth": { 00:11:31.137 "state": "completed", 00:11:31.137 "digest": "sha384", 00:11:31.137 "dhgroup": "ffdhe4096" 00:11:31.137 } 00:11:31.137 } 00:11:31.137 ]' 00:11:31.137 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:31.137 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:31.137 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
00:11:31.396 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:31.396 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:31.396 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.396 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.396 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.655 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:11:32.222 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.222 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:32.222 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.222 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.222 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.222 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:32.222 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:32.222 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:32.222 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:32.482 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:11:32.482 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:32.482 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:32.482 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:32.482 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:32.482 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.482 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.482 07:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.482 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.482 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.482 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.482 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.740 00:11:32.999 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:32.999 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:32.999 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.259 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.259 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.259 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.259 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.259 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.259 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:33.259 { 00:11:33.259 "cntlid": 81, 00:11:33.259 "qid": 0, 00:11:33.259 "state": "enabled", 00:11:33.259 "thread": "nvmf_tgt_poll_group_000", 00:11:33.259 "listen_address": { 00:11:33.259 "trtype": "TCP", 00:11:33.259 "adrfam": "IPv4", 00:11:33.259 "traddr": "10.0.0.2", 00:11:33.259 "trsvcid": "4420" 00:11:33.259 }, 00:11:33.259 "peer_address": { 00:11:33.259 "trtype": "TCP", 00:11:33.259 "adrfam": "IPv4", 00:11:33.259 "traddr": "10.0.0.1", 00:11:33.259 "trsvcid": "42474" 00:11:33.259 }, 00:11:33.259 "auth": { 00:11:33.259 "state": "completed", 00:11:33.259 "digest": "sha384", 00:11:33.259 "dhgroup": "ffdhe6144" 00:11:33.259 } 00:11:33.259 } 00:11:33.259 ]' 00:11:33.259 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:33.259 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:33.259 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:33.259 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:33.259 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:33.259 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:11:33.259 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.259 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.517 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:11:34.085 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.085 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:34.085 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.085 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.085 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.085 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:34.085 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:34.085 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:34.344 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:11:34.344 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:34.344 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:34.344 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:34.344 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:34.344 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.344 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.344 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.344 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.344 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.344 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.344 07:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.912 00:11:34.912 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:34.912 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:34.912 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.171 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.171 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.171 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.171 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.171 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.171 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:35.171 { 00:11:35.171 "cntlid": 83, 00:11:35.171 "qid": 0, 00:11:35.171 "state": "enabled", 00:11:35.171 "thread": "nvmf_tgt_poll_group_000", 00:11:35.171 "listen_address": { 00:11:35.171 "trtype": "TCP", 00:11:35.171 "adrfam": "IPv4", 00:11:35.171 "traddr": "10.0.0.2", 00:11:35.171 "trsvcid": "4420" 00:11:35.171 }, 00:11:35.171 "peer_address": { 00:11:35.171 "trtype": "TCP", 00:11:35.171 "adrfam": "IPv4", 00:11:35.171 "traddr": "10.0.0.1", 00:11:35.171 "trsvcid": "39490" 00:11:35.171 }, 00:11:35.171 "auth": { 00:11:35.171 "state": "completed", 00:11:35.171 "digest": "sha384", 00:11:35.171 "dhgroup": "ffdhe6144" 00:11:35.171 } 00:11:35.171 } 00:11:35.171 ]' 00:11:35.171 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:35.171 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:35.171 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:35.171 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:35.171 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:35.171 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.171 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.171 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.738 07:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:01:YzZmNDg0YzRmMWMxNjhhZmRiMDA4ZmM0YzI2MGYyNTZo4xV2: --dhchap-ctrl-secret DHHC-1:02:N2M0YzNlNjFjZmZlYjA3Y2YxYTFlM2E0MmM5MGYyOGQ3NDRhODM1ZjI5MWY0NGM0FQ/mQw==: 00:11:36.306 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.306 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:36.306 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.306 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.306 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.306 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:36.306 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:36.306 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:36.564 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:11:36.564 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:36.564 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:36.564 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:36.564 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:36.564 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.564 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.564 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.564 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.564 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.564 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.564 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.822 00:11:36.822 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:36.822 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:36.822 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.081 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.081 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.081 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.081 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.081 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.081 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:37.081 { 00:11:37.081 "cntlid": 85, 00:11:37.081 "qid": 0, 00:11:37.081 "state": "enabled", 00:11:37.081 "thread": "nvmf_tgt_poll_group_000", 00:11:37.081 "listen_address": { 00:11:37.081 "trtype": "TCP", 00:11:37.081 "adrfam": "IPv4", 00:11:37.081 "traddr": "10.0.0.2", 00:11:37.081 "trsvcid": "4420" 00:11:37.081 }, 00:11:37.081 "peer_address": { 00:11:37.081 "trtype": "TCP", 00:11:37.081 "adrfam": "IPv4", 00:11:37.081 "traddr": "10.0.0.1", 00:11:37.081 "trsvcid": "39528" 00:11:37.081 }, 00:11:37.081 "auth": { 00:11:37.081 "state": "completed", 00:11:37.081 "digest": "sha384", 00:11:37.081 "dhgroup": "ffdhe6144" 00:11:37.081 } 00:11:37.081 } 00:11:37.081 ]' 00:11:37.081 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.339 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:37.339 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.339 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:37.339 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.339 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.339 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.339 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.598 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:02:ZDQ0NWJjNzZhYTYxMGZhNzkwNzYyODE5ZGEwYmQyNDIxYWFlNWQ2MGQ5NDk4MGFiHBFIeQ==: --dhchap-ctrl-secret 
DHHC-1:01:NmIzM2IxZTI5N2NmZjU2ODU2MDc4OTc4ZGM3MzJmOTeZS3F4: 00:11:38.164 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.164 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:38.164 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.164 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.164 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.164 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:38.164 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:38.164 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:38.421 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:11:38.421 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.421 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:38.421 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:38.422 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:38.422 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.422 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:11:38.422 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.422 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.422 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.422 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:38.422 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:38.988 00:11:38.988 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.988 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.988 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:39.246 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.246 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.246 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.246 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.246 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.246 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.246 { 00:11:39.246 "cntlid": 87, 00:11:39.246 "qid": 0, 00:11:39.246 "state": "enabled", 00:11:39.246 "thread": "nvmf_tgt_poll_group_000", 00:11:39.246 "listen_address": { 00:11:39.246 "trtype": "TCP", 00:11:39.246 "adrfam": "IPv4", 00:11:39.246 "traddr": "10.0.0.2", 00:11:39.246 "trsvcid": "4420" 00:11:39.246 }, 00:11:39.246 "peer_address": { 00:11:39.246 "trtype": "TCP", 00:11:39.246 "adrfam": "IPv4", 00:11:39.246 "traddr": "10.0.0.1", 00:11:39.246 "trsvcid": "39558" 00:11:39.246 }, 00:11:39.246 "auth": { 00:11:39.246 "state": "completed", 00:11:39.246 "digest": "sha384", 00:11:39.246 "dhgroup": "ffdhe6144" 00:11:39.246 } 00:11:39.246 } 00:11:39.246 ]' 00:11:39.246 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.247 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:39.247 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.247 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:39.247 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.505 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.505 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.505 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.763 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:11:40.328 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.328 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:40.328 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.328 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.328 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.328 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:40.328 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:40.328 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:40.328 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:40.586 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:11:40.586 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.586 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:40.586 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:40.586 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:40.586 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.586 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.586 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.586 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.586 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.586 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.587 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.153 00:11:41.153 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:41.153 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:41.153 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.412 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.412 07:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.412 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.412 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.412 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.412 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:41.412 { 00:11:41.412 "cntlid": 89, 00:11:41.412 "qid": 0, 00:11:41.412 "state": "enabled", 00:11:41.412 "thread": "nvmf_tgt_poll_group_000", 00:11:41.412 "listen_address": { 00:11:41.412 "trtype": "TCP", 00:11:41.412 "adrfam": "IPv4", 00:11:41.412 "traddr": "10.0.0.2", 00:11:41.412 "trsvcid": "4420" 00:11:41.412 }, 00:11:41.412 "peer_address": { 00:11:41.412 "trtype": "TCP", 00:11:41.412 "adrfam": "IPv4", 00:11:41.412 "traddr": "10.0.0.1", 00:11:41.412 "trsvcid": "39590" 00:11:41.412 }, 00:11:41.412 "auth": { 00:11:41.412 "state": "completed", 00:11:41.412 "digest": "sha384", 00:11:41.412 "dhgroup": "ffdhe8192" 00:11:41.412 } 00:11:41.412 } 00:11:41.412 ]' 00:11:41.412 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:41.412 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:41.412 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:41.412 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:41.412 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.412 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.412 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.412 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.670 07:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:11:42.604 07:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.604 07:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:42.604 07:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.604 07:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.604 07:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.604 07:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.604 07:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:42.604 07:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:42.604 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:11:42.604 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.604 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:42.604 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:42.604 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:42.604 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.604 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.604 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.604 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.604 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.604 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.604 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.169 00:11:43.169 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:43.169 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.169 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.426 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.426 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.426 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.426 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.426 07:37:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.426 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.426 { 00:11:43.426 "cntlid": 91, 00:11:43.426 "qid": 0, 00:11:43.426 "state": "enabled", 00:11:43.426 "thread": "nvmf_tgt_poll_group_000", 00:11:43.426 "listen_address": { 00:11:43.426 "trtype": "TCP", 00:11:43.426 "adrfam": "IPv4", 00:11:43.426 "traddr": "10.0.0.2", 00:11:43.426 "trsvcid": "4420" 00:11:43.426 }, 00:11:43.426 "peer_address": { 00:11:43.426 "trtype": "TCP", 00:11:43.426 "adrfam": "IPv4", 00:11:43.426 "traddr": "10.0.0.1", 00:11:43.426 "trsvcid": "39614" 00:11:43.426 }, 00:11:43.426 "auth": { 00:11:43.426 "state": "completed", 00:11:43.426 "digest": "sha384", 00:11:43.426 "dhgroup": "ffdhe8192" 00:11:43.426 } 00:11:43.426 } 00:11:43.426 ]' 00:11:43.426 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.684 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:43.684 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.684 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:43.684 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.684 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.684 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.684 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.942 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:01:YzZmNDg0YzRmMWMxNjhhZmRiMDA4ZmM0YzI2MGYyNTZo4xV2: --dhchap-ctrl-secret DHHC-1:02:N2M0YzNlNjFjZmZlYjA3Y2YxYTFlM2E0MmM5MGYyOGQ3NDRhODM1ZjI5MWY0NGM0FQ/mQw==: 00:11:44.507 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.507 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:44.507 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.507 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.507 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.507 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.507 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:44.507 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:44.765 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:11:44.765 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.765 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:44.765 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:44.765 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:44.765 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.765 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.765 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.765 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.765 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.765 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.765 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.698 00:11:45.698 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.698 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.698 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.699 07:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.699 07:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.699 07:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.699 07:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.699 07:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.699 07:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.699 { 00:11:45.699 "cntlid": 93, 00:11:45.699 "qid": 0, 00:11:45.699 "state": "enabled", 00:11:45.699 "thread": "nvmf_tgt_poll_group_000", 00:11:45.699 "listen_address": { 00:11:45.699 "trtype": "TCP", 00:11:45.699 "adrfam": "IPv4", 
00:11:45.699 "traddr": "10.0.0.2", 00:11:45.699 "trsvcid": "4420" 00:11:45.699 }, 00:11:45.699 "peer_address": { 00:11:45.699 "trtype": "TCP", 00:11:45.699 "adrfam": "IPv4", 00:11:45.699 "traddr": "10.0.0.1", 00:11:45.699 "trsvcid": "34686" 00:11:45.699 }, 00:11:45.699 "auth": { 00:11:45.699 "state": "completed", 00:11:45.699 "digest": "sha384", 00:11:45.699 "dhgroup": "ffdhe8192" 00:11:45.699 } 00:11:45.699 } 00:11:45.699 ]' 00:11:45.699 07:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.957 07:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:45.957 07:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.957 07:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:45.957 07:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.957 07:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.957 07:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.957 07:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.216 07:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:02:ZDQ0NWJjNzZhYTYxMGZhNzkwNzYyODE5ZGEwYmQyNDIxYWFlNWQ2MGQ5NDk4MGFiHBFIeQ==: --dhchap-ctrl-secret DHHC-1:01:NmIzM2IxZTI5N2NmZjU2ODU2MDc4OTc4ZGM3MzJmOTeZS3F4: 00:11:46.781 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.040 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:47.040 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.040 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.040 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.040 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:47.040 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:47.040 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:47.299 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:11:47.299 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:47.299 07:37:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:47.299 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:47.299 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:47.299 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.299 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:11:47.299 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.299 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.299 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.299 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:47.299 07:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:47.875 00:11:47.876 07:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.876 07:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.876 07:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:48.136 07:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.136 07:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.136 07:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.136 07:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.136 07:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.136 07:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:48.136 { 00:11:48.136 "cntlid": 95, 00:11:48.136 "qid": 0, 00:11:48.136 "state": "enabled", 00:11:48.136 "thread": "nvmf_tgt_poll_group_000", 00:11:48.136 "listen_address": { 00:11:48.137 "trtype": "TCP", 00:11:48.137 "adrfam": "IPv4", 00:11:48.137 "traddr": "10.0.0.2", 00:11:48.137 "trsvcid": "4420" 00:11:48.137 }, 00:11:48.137 "peer_address": { 00:11:48.137 "trtype": "TCP", 00:11:48.137 "adrfam": "IPv4", 00:11:48.137 "traddr": "10.0.0.1", 00:11:48.137 "trsvcid": "34704" 00:11:48.137 }, 00:11:48.137 "auth": { 00:11:48.137 "state": "completed", 00:11:48.137 "digest": "sha384", 00:11:48.137 "dhgroup": "ffdhe8192" 00:11:48.137 } 00:11:48.137 } 00:11:48.137 ]' 00:11:48.137 07:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:48.137 07:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:48.137 07:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:48.137 07:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:48.137 07:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:48.137 07:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.137 07:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.137 07:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.399 07:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:11:49.000 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.000 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:49.000 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.000 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.000 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.000 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:49.000 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:49.000 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:49.000 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:49.000 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:49.258 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:11:49.258 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:49.258 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:49.259 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:49.259 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:49.259 07:37:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.259 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.259 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.259 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.259 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.259 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.259 07:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.524 00:11:49.784 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:49.784 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:49.784 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.041 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.041 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.041 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.041 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.041 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.041 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:50.041 { 00:11:50.041 "cntlid": 97, 00:11:50.041 "qid": 0, 00:11:50.041 "state": "enabled", 00:11:50.041 "thread": "nvmf_tgt_poll_group_000", 00:11:50.041 "listen_address": { 00:11:50.041 "trtype": "TCP", 00:11:50.041 "adrfam": "IPv4", 00:11:50.041 "traddr": "10.0.0.2", 00:11:50.041 "trsvcid": "4420" 00:11:50.041 }, 00:11:50.041 "peer_address": { 00:11:50.041 "trtype": "TCP", 00:11:50.041 "adrfam": "IPv4", 00:11:50.041 "traddr": "10.0.0.1", 00:11:50.041 "trsvcid": "34716" 00:11:50.041 }, 00:11:50.041 "auth": { 00:11:50.041 "state": "completed", 00:11:50.041 "digest": "sha512", 00:11:50.041 "dhgroup": "null" 00:11:50.041 } 00:11:50.041 } 00:11:50.041 ]' 00:11:50.041 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:50.041 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:50.041 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:50.041 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:50.041 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:50.041 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.041 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.041 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.299 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:11:50.864 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.864 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:50.864 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.864 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.864 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.864 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.864 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:50.864 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:51.124 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:11:51.124 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:51.124 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:51.124 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:51.124 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:51.124 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.124 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.124 07:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.124 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.124 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.124 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.124 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.382 00:11:51.641 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:51.641 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:51.641 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.641 07:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.641 07:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.641 07:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.641 07:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.899 07:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.899 07:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.899 { 00:11:51.899 "cntlid": 99, 00:11:51.899 "qid": 0, 00:11:51.899 "state": "enabled", 00:11:51.899 "thread": "nvmf_tgt_poll_group_000", 00:11:51.900 "listen_address": { 00:11:51.900 "trtype": "TCP", 00:11:51.900 "adrfam": "IPv4", 00:11:51.900 "traddr": "10.0.0.2", 00:11:51.900 "trsvcid": "4420" 00:11:51.900 }, 00:11:51.900 "peer_address": { 00:11:51.900 "trtype": "TCP", 00:11:51.900 "adrfam": "IPv4", 00:11:51.900 "traddr": "10.0.0.1", 00:11:51.900 "trsvcid": "34734" 00:11:51.900 }, 00:11:51.900 "auth": { 00:11:51.900 "state": "completed", 00:11:51.900 "digest": "sha512", 00:11:51.900 "dhgroup": "null" 00:11:51.900 } 00:11:51.900 } 00:11:51.900 ]' 00:11:51.900 07:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.900 07:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:51.900 07:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.900 07:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:51.900 07:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.900 07:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
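Each passing round in this trace follows the same host-side RPC shape: pin the digest and dhgroup, authorize the host with a key pair, attach a controller through the host-side SPDK instance, and check the qpair's auth block on the target. A minimal bash sketch of that round (assuming the rpc.py path and /var/tmp/host.sock socket seen here, the target's default RPC socket for the nvmf_* calls, and key names key1/ckey1 already registered by the test setup):

#!/usr/bin/env bash
# Rough sketch of one connect_authenticate round as traced above.
# Assumptions: rpc.py path and host socket as in this log; the target-side
# nvmf_* RPCs go to SPDK's default socket; key1/ckey1 are key names the
# test setup has already registered on the target.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a
digest=sha512 dhgroup=null keyid=1

# Pin the host-side initiator to a single digest/dhgroup combination.
"$rpc" -s "$hostsock" bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Authorize the host on the target subsystem with the chosen key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Attach a controller through the host-side SPDK instance; this triggers
# the DH-HMAC-CHAP handshake.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# The controller must exist on the host side ...
[[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# ... and the target must report a qpair that completed authentication
# with the expected digest and dhgroup.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

# Detach so the next digest/dhgroup/key combination starts clean.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0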
00:11:51.900 07:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.900 07:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.158 07:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:01:YzZmNDg0YzRmMWMxNjhhZmRiMDA4ZmM0YzI2MGYyNTZo4xV2: --dhchap-ctrl-secret DHHC-1:02:N2M0YzNlNjFjZmZlYjA3Y2YxYTFlM2E0MmM5MGYyOGQ3NDRhODM1ZjI5MWY0NGM0FQ/mQw==: 00:11:52.724 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.724 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:52.724 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.724 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.724 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.724 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:52.724 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:52.725 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:52.983 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:11:52.983 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:52.983 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:52.983 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:52.983 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:52.983 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.983 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.983 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.983 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.983 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.983 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.983 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.242 00:11:53.500 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.500 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.500 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.758 07:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.758 07:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.758 07:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.758 07:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.758 07:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.758 07:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.758 { 00:11:53.758 "cntlid": 101, 00:11:53.758 "qid": 0, 00:11:53.758 "state": "enabled", 00:11:53.758 "thread": "nvmf_tgt_poll_group_000", 00:11:53.758 "listen_address": { 00:11:53.758 "trtype": "TCP", 00:11:53.758 "adrfam": "IPv4", 00:11:53.758 "traddr": "10.0.0.2", 00:11:53.758 "trsvcid": "4420" 00:11:53.758 }, 00:11:53.758 "peer_address": { 00:11:53.758 "trtype": "TCP", 00:11:53.758 "adrfam": "IPv4", 00:11:53.758 "traddr": "10.0.0.1", 00:11:53.758 "trsvcid": "34770" 00:11:53.758 }, 00:11:53.758 "auth": { 00:11:53.758 "state": "completed", 00:11:53.758 "digest": "sha512", 00:11:53.758 "dhgroup": "null" 00:11:53.758 } 00:11:53.758 } 00:11:53.758 ]' 00:11:53.758 07:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.758 07:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:53.758 07:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.758 07:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:53.758 07:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.758 07:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.758 07:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.758 07:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.016 07:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:02:ZDQ0NWJjNzZhYTYxMGZhNzkwNzYyODE5ZGEwYmQyNDIxYWFlNWQ2MGQ5NDk4MGFiHBFIeQ==: --dhchap-ctrl-secret DHHC-1:01:NmIzM2IxZTI5N2NmZjU2ODU2MDc4OTc4ZGM3MzJmOTeZS3F4: 00:11:54.583 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.583 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:54.583 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.583 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.583 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.583 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.583 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:54.583 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:54.841 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:11:54.841 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.841 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:54.841 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:54.841 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:54.841 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.841 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:11:54.841 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.841 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.841 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.842 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:54.842 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:55.100 
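The nvme-cli leg that brackets each round reduces to a connect/disconnect with the same key material; a rough sketch (assuming an nvme-cli build with DH-HMAC-CHAP support; the DHHC-1 strings below are placeholders, not the secrets generated by the test, and the remove_host call assumes the target listens on SPDK's default RPC socket):

#!/usr/bin/env bash
# Kernel-initiator leg: connect with nvme-cli using DH-HMAC-CHAP secrets,
# then disconnect and deauthorize the host, as at target/auth.sh@52-@56.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostid=437e2608-a818-4ddb-8068-388d756b599a
hostnqn="nqn.2014-08.org.nvmexpress:uuid:$hostid"
secret='DHHC-1:03:<placeholder-host-secret>:'       # placeholder value
ctrl_secret='DHHC-1:03:<placeholder-ctrl-secret>:'  # placeholder value

# Connect over TCP, authenticating in both directions.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"

# Disconnect; a successful handshake leaves exactly one controller behind,
# mirroring the "disconnected 1 controller(s)" lines in the log.
nvme disconnect -n "$subnqn" | grep -q 'disconnected 1 controller'

# Remove the host from the subsystem before the next key/dhgroup combination.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"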
00:11:55.100 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.100 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.100 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.358 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.358 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.358 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.358 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.618 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.618 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.618 { 00:11:55.618 "cntlid": 103, 00:11:55.618 "qid": 0, 00:11:55.618 "state": "enabled", 00:11:55.618 "thread": "nvmf_tgt_poll_group_000", 00:11:55.618 "listen_address": { 00:11:55.618 "trtype": "TCP", 00:11:55.618 "adrfam": "IPv4", 00:11:55.618 "traddr": "10.0.0.2", 00:11:55.618 "trsvcid": "4420" 00:11:55.618 }, 00:11:55.618 "peer_address": { 00:11:55.618 "trtype": "TCP", 00:11:55.618 "adrfam": "IPv4", 00:11:55.618 "traddr": "10.0.0.1", 00:11:55.618 "trsvcid": "44560" 00:11:55.618 }, 00:11:55.618 "auth": { 00:11:55.618 "state": "completed", 00:11:55.618 "digest": "sha512", 00:11:55.618 "dhgroup": "null" 00:11:55.618 } 00:11:55.618 } 00:11:55.618 ]' 00:11:55.618 07:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.618 07:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:55.618 07:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.618 07:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:55.618 07:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.618 07:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.618 07:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.618 07:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.876 07:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:11:56.443 07:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.443 07:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:56.443 07:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.443 07:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.443 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.443 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:56.443 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.443 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:56.443 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:56.702 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:11:56.702 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.702 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:56.702 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:56.702 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:56.702 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.702 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.702 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.702 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.702 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.702 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.702 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.269 00:11:57.269 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.269 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:57.269 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:11:57.269 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.269 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.269 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.269 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.269 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.269 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.269 { 00:11:57.269 "cntlid": 105, 00:11:57.269 "qid": 0, 00:11:57.269 "state": "enabled", 00:11:57.269 "thread": "nvmf_tgt_poll_group_000", 00:11:57.269 "listen_address": { 00:11:57.269 "trtype": "TCP", 00:11:57.269 "adrfam": "IPv4", 00:11:57.269 "traddr": "10.0.0.2", 00:11:57.269 "trsvcid": "4420" 00:11:57.269 }, 00:11:57.269 "peer_address": { 00:11:57.269 "trtype": "TCP", 00:11:57.269 "adrfam": "IPv4", 00:11:57.269 "traddr": "10.0.0.1", 00:11:57.269 "trsvcid": "44580" 00:11:57.269 }, 00:11:57.269 "auth": { 00:11:57.269 "state": "completed", 00:11:57.269 "digest": "sha512", 00:11:57.269 "dhgroup": "ffdhe2048" 00:11:57.269 } 00:11:57.269 } 00:11:57.269 ]' 00:11:57.269 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.269 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:57.269 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.527 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:57.527 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.527 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.527 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.527 07:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.786 07:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:11:58.352 07:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.352 07:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:11:58.352 07:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.352 07:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.352 07:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.352 07:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.352 07:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:58.352 07:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:58.611 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:11:58.611 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.611 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:58.611 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:58.611 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:58.611 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.611 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.611 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.611 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.611 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.611 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.611 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.870 00:11:58.870 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:58.870 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.870 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.129 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.129 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.129 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:11:59.129 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.129 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.129 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.129 { 00:11:59.129 "cntlid": 107, 00:11:59.129 "qid": 0, 00:11:59.129 "state": "enabled", 00:11:59.129 "thread": "nvmf_tgt_poll_group_000", 00:11:59.129 "listen_address": { 00:11:59.129 "trtype": "TCP", 00:11:59.129 "adrfam": "IPv4", 00:11:59.129 "traddr": "10.0.0.2", 00:11:59.129 "trsvcid": "4420" 00:11:59.129 }, 00:11:59.129 "peer_address": { 00:11:59.129 "trtype": "TCP", 00:11:59.129 "adrfam": "IPv4", 00:11:59.129 "traddr": "10.0.0.1", 00:11:59.129 "trsvcid": "44606" 00:11:59.129 }, 00:11:59.129 "auth": { 00:11:59.129 "state": "completed", 00:11:59.129 "digest": "sha512", 00:11:59.129 "dhgroup": "ffdhe2048" 00:11:59.129 } 00:11:59.129 } 00:11:59.129 ]' 00:11:59.130 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.130 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:59.130 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.388 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:59.388 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.388 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.388 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.388 07:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.647 07:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:01:YzZmNDg0YzRmMWMxNjhhZmRiMDA4ZmM0YzI2MGYyNTZo4xV2: --dhchap-ctrl-secret DHHC-1:02:N2M0YzNlNjFjZmZlYjA3Y2YxYTFlM2E0MmM5MGYyOGQ3NDRhODM1ZjI5MWY0NGM0FQ/mQw==: 00:12:00.215 07:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.215 07:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:00.215 07:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.215 07:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.215 07:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.215 07:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.215 07:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:00.215 07:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:00.473 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:00.473 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.473 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:00.473 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:00.473 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:00.473 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.473 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.473 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.474 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.732 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.732 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.732 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.991 00:12:00.991 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:00.991 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:00.991 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.250 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.250 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.250 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.250 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.250 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.250 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.250 { 00:12:01.250 "cntlid": 109, 00:12:01.250 "qid": 0, 
00:12:01.250 "state": "enabled", 00:12:01.250 "thread": "nvmf_tgt_poll_group_000", 00:12:01.250 "listen_address": { 00:12:01.250 "trtype": "TCP", 00:12:01.250 "adrfam": "IPv4", 00:12:01.250 "traddr": "10.0.0.2", 00:12:01.250 "trsvcid": "4420" 00:12:01.250 }, 00:12:01.250 "peer_address": { 00:12:01.250 "trtype": "TCP", 00:12:01.250 "adrfam": "IPv4", 00:12:01.250 "traddr": "10.0.0.1", 00:12:01.250 "trsvcid": "44620" 00:12:01.250 }, 00:12:01.250 "auth": { 00:12:01.250 "state": "completed", 00:12:01.250 "digest": "sha512", 00:12:01.250 "dhgroup": "ffdhe2048" 00:12:01.250 } 00:12:01.250 } 00:12:01.250 ]' 00:12:01.250 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.250 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:01.250 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.250 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:01.250 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.250 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.250 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.250 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.509 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:02:ZDQ0NWJjNzZhYTYxMGZhNzkwNzYyODE5ZGEwYmQyNDIxYWFlNWQ2MGQ5NDk4MGFiHBFIeQ==: --dhchap-ctrl-secret DHHC-1:01:NmIzM2IxZTI5N2NmZjU2ODU2MDc4OTc4ZGM3MzJmOTeZS3F4: 00:12:02.444 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.444 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:02.444 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.444 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.444 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.444 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:02.444 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:02.444 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:02.444 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 
ffdhe2048 3 00:12:02.444 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:02.444 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:02.444 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:02.444 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:02.444 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.445 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:12:02.445 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.445 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.445 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.445 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:02.445 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:02.703 00:12:02.703 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:02.703 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:02.703 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.270 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.270 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.270 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.270 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.270 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.270 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:03.270 { 00:12:03.270 "cntlid": 111, 00:12:03.270 "qid": 0, 00:12:03.270 "state": "enabled", 00:12:03.270 "thread": "nvmf_tgt_poll_group_000", 00:12:03.270 "listen_address": { 00:12:03.270 "trtype": "TCP", 00:12:03.270 "adrfam": "IPv4", 00:12:03.270 "traddr": "10.0.0.2", 00:12:03.270 "trsvcid": "4420" 00:12:03.270 }, 00:12:03.270 "peer_address": { 00:12:03.270 "trtype": "TCP", 00:12:03.270 "adrfam": "IPv4", 00:12:03.270 "traddr": "10.0.0.1", 00:12:03.270 "trsvcid": "44638" 00:12:03.270 }, 00:12:03.270 "auth": { 00:12:03.270 "state": "completed", 00:12:03.270 
"digest": "sha512", 00:12:03.270 "dhgroup": "ffdhe2048" 00:12:03.270 } 00:12:03.270 } 00:12:03.270 ]' 00:12:03.270 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.270 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:03.270 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.270 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:03.270 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.270 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.270 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.270 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.528 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:12:04.094 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.094 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:04.094 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.094 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.094 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.094 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:04.094 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:04.094 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:04.094 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:04.352 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:04.352 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:04.352 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:04.352 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:04.352 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:12:04.352 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.352 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.352 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.352 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.352 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.352 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.352 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.615 00:12:04.615 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:04.615 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.615 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.874 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.874 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.874 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.874 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.874 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.874 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.874 { 00:12:04.874 "cntlid": 113, 00:12:04.874 "qid": 0, 00:12:04.874 "state": "enabled", 00:12:04.874 "thread": "nvmf_tgt_poll_group_000", 00:12:04.874 "listen_address": { 00:12:04.874 "trtype": "TCP", 00:12:04.874 "adrfam": "IPv4", 00:12:04.874 "traddr": "10.0.0.2", 00:12:04.874 "trsvcid": "4420" 00:12:04.874 }, 00:12:04.874 "peer_address": { 00:12:04.874 "trtype": "TCP", 00:12:04.874 "adrfam": "IPv4", 00:12:04.874 "traddr": "10.0.0.1", 00:12:04.874 "trsvcid": "45790" 00:12:04.874 }, 00:12:04.874 "auth": { 00:12:04.874 "state": "completed", 00:12:04.874 "digest": "sha512", 00:12:04.874 "dhgroup": "ffdhe3072" 00:12:04.874 } 00:12:04.874 } 00:12:04.874 ]' 00:12:04.874 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:04.874 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:04.874 07:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:05.133 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:05.133 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:05.133 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.133 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.133 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.391 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:12:05.959 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.959 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:05.959 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.959 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.959 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.959 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.959 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:05.959 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:06.217 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:06.217 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:06.217 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:06.217 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:06.217 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:06.217 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.217 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.217 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.217 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.217 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.217 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.217 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.475 00:12:06.475 07:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:06.475 07:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.475 07:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.733 07:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.733 07:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.733 07:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.734 07:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.734 07:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.734 07:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:06.734 { 00:12:06.734 "cntlid": 115, 00:12:06.734 "qid": 0, 00:12:06.734 "state": "enabled", 00:12:06.734 "thread": "nvmf_tgt_poll_group_000", 00:12:06.734 "listen_address": { 00:12:06.734 "trtype": "TCP", 00:12:06.734 "adrfam": "IPv4", 00:12:06.734 "traddr": "10.0.0.2", 00:12:06.734 "trsvcid": "4420" 00:12:06.734 }, 00:12:06.734 "peer_address": { 00:12:06.734 "trtype": "TCP", 00:12:06.734 "adrfam": "IPv4", 00:12:06.734 "traddr": "10.0.0.1", 00:12:06.734 "trsvcid": "45814" 00:12:06.734 }, 00:12:06.734 "auth": { 00:12:06.734 "state": "completed", 00:12:06.734 "digest": "sha512", 00:12:06.734 "dhgroup": "ffdhe3072" 00:12:06.734 } 00:12:06.734 } 00:12:06.734 ]' 00:12:06.734 07:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:06.991 07:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:06.991 07:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:06.991 07:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:06.991 07:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:06.991 07:37:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.991 07:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.991 07:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.249 07:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:01:YzZmNDg0YzRmMWMxNjhhZmRiMDA4ZmM0YzI2MGYyNTZo4xV2: --dhchap-ctrl-secret DHHC-1:02:N2M0YzNlNjFjZmZlYjA3Y2YxYTFlM2E0MmM5MGYyOGQ3NDRhODM1ZjI5MWY0NGM0FQ/mQw==: 00:12:07.816 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.816 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:07.816 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.816 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.816 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.816 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:07.816 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:07.816 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:08.074 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:08.074 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.074 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:08.074 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:08.074 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:08.074 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.074 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.074 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.074 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.074 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.074 07:37:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.074 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.333 00:12:08.333 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.333 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.333 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.904 07:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.904 07:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.904 07:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.904 07:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.904 07:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.904 07:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:08.904 { 00:12:08.904 "cntlid": 117, 00:12:08.904 "qid": 0, 00:12:08.904 "state": "enabled", 00:12:08.904 "thread": "nvmf_tgt_poll_group_000", 00:12:08.904 "listen_address": { 00:12:08.904 "trtype": "TCP", 00:12:08.904 "adrfam": "IPv4", 00:12:08.904 "traddr": "10.0.0.2", 00:12:08.904 "trsvcid": "4420" 00:12:08.904 }, 00:12:08.904 "peer_address": { 00:12:08.904 "trtype": "TCP", 00:12:08.904 "adrfam": "IPv4", 00:12:08.904 "traddr": "10.0.0.1", 00:12:08.904 "trsvcid": "45832" 00:12:08.904 }, 00:12:08.904 "auth": { 00:12:08.904 "state": "completed", 00:12:08.904 "digest": "sha512", 00:12:08.904 "dhgroup": "ffdhe3072" 00:12:08.904 } 00:12:08.904 } 00:12:08.904 ]' 00:12:08.904 07:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:08.904 07:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:08.904 07:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:08.904 07:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:08.904 07:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:08.904 07:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.904 07:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.904 07:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
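The trace above is one pass of the test's inner loop: restrict the host to a single digest/DH-group pair, register the key on the subsystem, attach a controller through the host RPC socket, and confirm the resulting qpair reports the negotiated auth parameters before detaching. A condensed reconstruction of that pass, using only commands that appear verbatim in this trace (rpc.py path abbreviated; the sha512/ffdhe3072/key2 values are the ones exercised immediately above):

    # host side: allow only the digest/dhgroup under test
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # target side: authorize the host NQN with this key pair
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # host side: attach, verify the authenticated qpair, then detach
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state'    # expected value in this trace: "completed"
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0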
00:12:09.186 07:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:02:ZDQ0NWJjNzZhYTYxMGZhNzkwNzYyODE5ZGEwYmQyNDIxYWFlNWQ2MGQ5NDk4MGFiHBFIeQ==: --dhchap-ctrl-secret DHHC-1:01:NmIzM2IxZTI5N2NmZjU2ODU2MDc4OTc4ZGM3MzJmOTeZS3F4: 00:12:09.765 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.765 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:09.765 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.765 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.765 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.765 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.765 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:09.765 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:10.023 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:10.023 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:10.023 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:10.023 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:10.023 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:10.023 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.023 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:12:10.023 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.023 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.023 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.023 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:10.023 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:10.281 00:12:10.281 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.281 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.281 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.539 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.539 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.539 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.539 07:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.539 07:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.539 07:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:10.539 { 00:12:10.539 "cntlid": 119, 00:12:10.539 "qid": 0, 00:12:10.539 "state": "enabled", 00:12:10.539 "thread": "nvmf_tgt_poll_group_000", 00:12:10.539 "listen_address": { 00:12:10.539 "trtype": "TCP", 00:12:10.539 "adrfam": "IPv4", 00:12:10.539 "traddr": "10.0.0.2", 00:12:10.539 "trsvcid": "4420" 00:12:10.539 }, 00:12:10.539 "peer_address": { 00:12:10.539 "trtype": "TCP", 00:12:10.539 "adrfam": "IPv4", 00:12:10.539 "traddr": "10.0.0.1", 00:12:10.539 "trsvcid": "45864" 00:12:10.539 }, 00:12:10.539 "auth": { 00:12:10.539 "state": "completed", 00:12:10.539 "digest": "sha512", 00:12:10.539 "dhgroup": "ffdhe3072" 00:12:10.539 } 00:12:10.539 } 00:12:10.539 ]' 00:12:10.539 07:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:10.539 07:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.539 07:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:10.539 07:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:10.539 07:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:10.539 07:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.539 07:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.539 07:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.105 07:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:12:11.672 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:12:11.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.672 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:11.672 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.672 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.672 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.672 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:11.672 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.672 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:11.672 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:11.672 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:12:11.672 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.672 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:11.672 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:11.672 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:11.673 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.673 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.673 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.673 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.673 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.673 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.673 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.240 00:12:12.240 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:12.240 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:12:12.240 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.497 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.497 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.497 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.497 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.497 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.497 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:12.497 { 00:12:12.497 "cntlid": 121, 00:12:12.497 "qid": 0, 00:12:12.497 "state": "enabled", 00:12:12.497 "thread": "nvmf_tgt_poll_group_000", 00:12:12.497 "listen_address": { 00:12:12.497 "trtype": "TCP", 00:12:12.497 "adrfam": "IPv4", 00:12:12.497 "traddr": "10.0.0.2", 00:12:12.497 "trsvcid": "4420" 00:12:12.497 }, 00:12:12.497 "peer_address": { 00:12:12.497 "trtype": "TCP", 00:12:12.497 "adrfam": "IPv4", 00:12:12.497 "traddr": "10.0.0.1", 00:12:12.497 "trsvcid": "45892" 00:12:12.497 }, 00:12:12.497 "auth": { 00:12:12.497 "state": "completed", 00:12:12.497 "digest": "sha512", 00:12:12.497 "dhgroup": "ffdhe4096" 00:12:12.498 } 00:12:12.498 } 00:12:12.498 ]' 00:12:12.498 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:12.498 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:12.498 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:12.498 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:12.498 07:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.498 07:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.498 07:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.498 07:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.756 07:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:12:13.322 07:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.322 07:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:13.322 07:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.322 07:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.322 07:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.322 07:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.322 07:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:13.322 07:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:13.579 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:13.579 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.579 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:13.579 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:13.579 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:13.579 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.579 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.579 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.579 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.579 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.579 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.579 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.145 00:12:14.145 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:14.145 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.145 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.403 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.403 07:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.404 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.404 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.404 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.404 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:14.404 { 00:12:14.404 "cntlid": 123, 00:12:14.404 "qid": 0, 00:12:14.404 "state": "enabled", 00:12:14.404 "thread": "nvmf_tgt_poll_group_000", 00:12:14.404 "listen_address": { 00:12:14.404 "trtype": "TCP", 00:12:14.404 "adrfam": "IPv4", 00:12:14.404 "traddr": "10.0.0.2", 00:12:14.404 "trsvcid": "4420" 00:12:14.404 }, 00:12:14.404 "peer_address": { 00:12:14.404 "trtype": "TCP", 00:12:14.404 "adrfam": "IPv4", 00:12:14.404 "traddr": "10.0.0.1", 00:12:14.404 "trsvcid": "36202" 00:12:14.404 }, 00:12:14.404 "auth": { 00:12:14.404 "state": "completed", 00:12:14.404 "digest": "sha512", 00:12:14.404 "dhgroup": "ffdhe4096" 00:12:14.404 } 00:12:14.404 } 00:12:14.404 ]' 00:12:14.404 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:14.404 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.404 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.404 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:14.404 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.404 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.404 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.404 07:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.662 07:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:01:YzZmNDg0YzRmMWMxNjhhZmRiMDA4ZmM0YzI2MGYyNTZo4xV2: --dhchap-ctrl-secret DHHC-1:02:N2M0YzNlNjFjZmZlYjA3Y2YxYTFlM2E0MmM5MGYyOGQ3NDRhODM1ZjI5MWY0NGM0FQ/mQw==: 00:12:15.593 07:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.593 07:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:15.593 07:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.593 07:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.593 07:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
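Each pass then re-checks the same key material through the kernel initiator: nvme-cli connects with the plaintext DHHC-1 secrets, disconnects, and the host entry is removed from the subsystem before the next digest/DH-group combination is configured. The same half of the loop, condensed from the commands visible just above (the DHHC-1 strings are the throwaway secrets generated for this run and are elided here):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a \
        --hostid 437e2608-a818-4ddb-8068-388d756b599a \
        --dhchap-secret 'DHHC-1:...' --dhchap-ctrl-secret 'DHHC-1:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # target side: drop the host so the next iteration starts from a clean subsystem
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a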
00:12:15.593 07:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:15.593 07:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:15.593 07:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:15.593 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:15.593 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.593 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:15.593 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:15.593 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:15.593 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.593 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.593 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.593 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.593 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.593 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.593 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.158 00:12:16.158 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:16.158 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.158 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:16.416 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.416 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.416 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.416 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.416 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.416 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:16.416 { 00:12:16.416 "cntlid": 125, 00:12:16.416 "qid": 0, 00:12:16.416 "state": "enabled", 00:12:16.416 "thread": "nvmf_tgt_poll_group_000", 00:12:16.416 "listen_address": { 00:12:16.416 "trtype": "TCP", 00:12:16.416 "adrfam": "IPv4", 00:12:16.416 "traddr": "10.0.0.2", 00:12:16.416 "trsvcid": "4420" 00:12:16.416 }, 00:12:16.416 "peer_address": { 00:12:16.416 "trtype": "TCP", 00:12:16.416 "adrfam": "IPv4", 00:12:16.416 "traddr": "10.0.0.1", 00:12:16.416 "trsvcid": "36232" 00:12:16.416 }, 00:12:16.416 "auth": { 00:12:16.416 "state": "completed", 00:12:16.416 "digest": "sha512", 00:12:16.416 "dhgroup": "ffdhe4096" 00:12:16.416 } 00:12:16.416 } 00:12:16.416 ]' 00:12:16.416 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:16.416 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.416 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:16.416 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:16.416 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.416 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.416 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.416 07:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.675 07:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:02:ZDQ0NWJjNzZhYTYxMGZhNzkwNzYyODE5ZGEwYmQyNDIxYWFlNWQ2MGQ5NDk4MGFiHBFIeQ==: --dhchap-ctrl-secret DHHC-1:01:NmIzM2IxZTI5N2NmZjU2ODU2MDc4OTc4ZGM3MzJmOTeZS3F4: 00:12:17.610 07:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.610 07:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:17.610 07:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.610 07:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.610 07:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.610 07:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:17.610 07:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:17.610 07:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:17.610 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:17.610 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.610 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:17.610 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:17.610 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:17.610 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.610 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:12:17.610 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.610 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.610 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.610 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:17.610 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:17.868 00:12:18.126 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:18.126 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.126 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:18.126 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.126 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.126 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.126 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.126 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.126 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:18.126 { 00:12:18.126 "cntlid": 127, 00:12:18.126 "qid": 0, 00:12:18.126 "state": "enabled", 00:12:18.126 "thread": "nvmf_tgt_poll_group_000", 00:12:18.126 "listen_address": { 00:12:18.126 "trtype": "TCP", 00:12:18.126 "adrfam": "IPv4", 00:12:18.126 "traddr": "10.0.0.2", 00:12:18.126 "trsvcid": "4420" 00:12:18.126 }, 00:12:18.126 "peer_address": { 
00:12:18.126 "trtype": "TCP", 00:12:18.126 "adrfam": "IPv4", 00:12:18.126 "traddr": "10.0.0.1", 00:12:18.126 "trsvcid": "36260" 00:12:18.126 }, 00:12:18.126 "auth": { 00:12:18.126 "state": "completed", 00:12:18.126 "digest": "sha512", 00:12:18.126 "dhgroup": "ffdhe4096" 00:12:18.126 } 00:12:18.126 } 00:12:18.126 ]' 00:12:18.126 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:18.384 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.384 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.384 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:18.384 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.384 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.384 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.384 07:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.642 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:12:19.208 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.208 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:19.208 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.208 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.208 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.208 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:19.208 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:19.208 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:19.208 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:19.466 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:12:19.466 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.466 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha512 00:12:19.466 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:19.466 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:19.466 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.466 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.466 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.466 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.466 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.466 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.466 07:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.031 00:12:20.031 07:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:20.031 07:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:20.031 07:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.287 07:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.287 07:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.287 07:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.287 07:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.287 07:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.287 07:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.287 { 00:12:20.287 "cntlid": 129, 00:12:20.287 "qid": 0, 00:12:20.287 "state": "enabled", 00:12:20.287 "thread": "nvmf_tgt_poll_group_000", 00:12:20.287 "listen_address": { 00:12:20.287 "trtype": "TCP", 00:12:20.287 "adrfam": "IPv4", 00:12:20.287 "traddr": "10.0.0.2", 00:12:20.287 "trsvcid": "4420" 00:12:20.288 }, 00:12:20.288 "peer_address": { 00:12:20.288 "trtype": "TCP", 00:12:20.288 "adrfam": "IPv4", 00:12:20.288 "traddr": "10.0.0.1", 00:12:20.288 "trsvcid": "36280" 00:12:20.288 }, 00:12:20.288 "auth": { 00:12:20.288 "state": "completed", 00:12:20.288 "digest": "sha512", 00:12:20.288 "dhgroup": "ffdhe6144" 00:12:20.288 } 00:12:20.288 } 00:12:20.288 ]' 00:12:20.288 07:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.288 07:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.288 07:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.288 07:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:20.288 07:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:20.288 07:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.288 07:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.288 07:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.545 07:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:12:21.479 07:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.479 07:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:21.479 07:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.479 07:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.479 07:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.479 07:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:21.479 07:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:21.480 07:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:21.738 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:12:21.738 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:21.738 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:21.738 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:21.738 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:21.738 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:12:21.738 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.738 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.738 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.738 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.738 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.738 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.997 00:12:21.997 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:21.997 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:21.997 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.256 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.256 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.256 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.256 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.256 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.256 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:22.256 { 00:12:22.256 "cntlid": 131, 00:12:22.256 "qid": 0, 00:12:22.256 "state": "enabled", 00:12:22.256 "thread": "nvmf_tgt_poll_group_000", 00:12:22.256 "listen_address": { 00:12:22.256 "trtype": "TCP", 00:12:22.256 "adrfam": "IPv4", 00:12:22.256 "traddr": "10.0.0.2", 00:12:22.256 "trsvcid": "4420" 00:12:22.256 }, 00:12:22.256 "peer_address": { 00:12:22.256 "trtype": "TCP", 00:12:22.256 "adrfam": "IPv4", 00:12:22.256 "traddr": "10.0.0.1", 00:12:22.256 "trsvcid": "36308" 00:12:22.256 }, 00:12:22.256 "auth": { 00:12:22.256 "state": "completed", 00:12:22.256 "digest": "sha512", 00:12:22.256 "dhgroup": "ffdhe6144" 00:12:22.256 } 00:12:22.256 } 00:12:22.256 ]' 00:12:22.256 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:22.514 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:22.514 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:22.514 07:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:22.514 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:22.514 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.514 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.514 07:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.772 07:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:01:YzZmNDg0YzRmMWMxNjhhZmRiMDA4ZmM0YzI2MGYyNTZo4xV2: --dhchap-ctrl-secret DHHC-1:02:N2M0YzNlNjFjZmZlYjA3Y2YxYTFlM2E0MmM5MGYyOGQ3NDRhODM1ZjI5MWY0NGM0FQ/mQw==: 00:12:23.338 07:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.338 07:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:23.338 07:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.338 07:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.338 07:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.338 07:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:23.338 07:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:23.338 07:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:23.597 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:12:23.597 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:23.597 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:23.597 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:23.597 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:23.597 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.597 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.597 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
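Up to this point each digest/dhgroup pass follows the same host/target round trip: restrict the host to one digest and DH group, register the host NQN on the subsystem with the key pair for the current index, attach a controller through the host RPC socket, and read the negotiated auth parameters back out of nvmf_subsystem_get_qpairs. A minimal standalone sketch of that sequence, assuming direct rpc.py calls in place of the auth.sh hostrpc/rpc_cmd wrappers and the target app listening on its default RPC socket (key2/ckey2 are key names registered earlier in the test):

# host side: only allow sha512 + ffdhe6144 for this pass
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
# target side: add the host NQN with bidirectional DH-HMAC-CHAP keys
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# host side: attach a controller, then inspect the authenticated qpair
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'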
00:12:23.597 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.597 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.597 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.597 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.165 00:12:24.165 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.165 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.165 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.423 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.423 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.423 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.423 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.423 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.423 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:24.423 { 00:12:24.423 "cntlid": 133, 00:12:24.423 "qid": 0, 00:12:24.423 "state": "enabled", 00:12:24.423 "thread": "nvmf_tgt_poll_group_000", 00:12:24.423 "listen_address": { 00:12:24.423 "trtype": "TCP", 00:12:24.423 "adrfam": "IPv4", 00:12:24.423 "traddr": "10.0.0.2", 00:12:24.423 "trsvcid": "4420" 00:12:24.423 }, 00:12:24.423 "peer_address": { 00:12:24.423 "trtype": "TCP", 00:12:24.423 "adrfam": "IPv4", 00:12:24.423 "traddr": "10.0.0.1", 00:12:24.423 "trsvcid": "32944" 00:12:24.423 }, 00:12:24.423 "auth": { 00:12:24.423 "state": "completed", 00:12:24.423 "digest": "sha512", 00:12:24.423 "dhgroup": "ffdhe6144" 00:12:24.423 } 00:12:24.423 } 00:12:24.423 ]' 00:12:24.423 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:24.423 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:24.423 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.423 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:24.423 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.423 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.423 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.423 07:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.681 07:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:02:ZDQ0NWJjNzZhYTYxMGZhNzkwNzYyODE5ZGEwYmQyNDIxYWFlNWQ2MGQ5NDk4MGFiHBFIeQ==: --dhchap-ctrl-secret DHHC-1:01:NmIzM2IxZTI5N2NmZjU2ODU2MDc4OTc4ZGM3MzJmOTeZS3F4: 00:12:25.615 07:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.615 07:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:25.615 07:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.615 07:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.615 07:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.615 07:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:25.615 07:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:25.615 07:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:25.615 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:12:25.615 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.615 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:25.615 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:25.615 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:25.615 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.615 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:12:25.615 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.615 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.615 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.615 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:25.615 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:26.181 00:12:26.181 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:26.181 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.181 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:26.439 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.439 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.439 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.439 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.439 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.439 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:26.439 { 00:12:26.439 "cntlid": 135, 00:12:26.439 "qid": 0, 00:12:26.439 "state": "enabled", 00:12:26.439 "thread": "nvmf_tgt_poll_group_000", 00:12:26.439 "listen_address": { 00:12:26.439 "trtype": "TCP", 00:12:26.439 "adrfam": "IPv4", 00:12:26.439 "traddr": "10.0.0.2", 00:12:26.439 "trsvcid": "4420" 00:12:26.439 }, 00:12:26.439 "peer_address": { 00:12:26.439 "trtype": "TCP", 00:12:26.439 "adrfam": "IPv4", 00:12:26.439 "traddr": "10.0.0.1", 00:12:26.439 "trsvcid": "32982" 00:12:26.439 }, 00:12:26.439 "auth": { 00:12:26.439 "state": "completed", 00:12:26.439 "digest": "sha512", 00:12:26.440 "dhgroup": "ffdhe6144" 00:12:26.440 } 00:12:26.440 } 00:12:26.440 ]' 00:12:26.440 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:26.440 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:26.440 07:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:26.440 07:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:26.440 07:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:26.698 07:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.698 07:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.698 07:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.698 07:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:12:27.633 07:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.633 07:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:27.633 07:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.633 07:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.633 07:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.633 07:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:27.633 07:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:27.633 07:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:27.633 07:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:27.633 07:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:12:27.633 07:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:27.633 07:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:27.633 07:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:27.633 07:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:27.633 07:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.633 07:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.633 07:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.633 07:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.633 07:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.633 07:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.633 07:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.200 00:12:28.200 07:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:28.200 07:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:28.200 07:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.459 07:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.459 07:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.459 07:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.459 07:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.717 07:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.717 07:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:28.717 { 00:12:28.717 "cntlid": 137, 00:12:28.717 "qid": 0, 00:12:28.717 "state": "enabled", 00:12:28.717 "thread": "nvmf_tgt_poll_group_000", 00:12:28.717 "listen_address": { 00:12:28.717 "trtype": "TCP", 00:12:28.717 "adrfam": "IPv4", 00:12:28.717 "traddr": "10.0.0.2", 00:12:28.717 "trsvcid": "4420" 00:12:28.717 }, 00:12:28.717 "peer_address": { 00:12:28.717 "trtype": "TCP", 00:12:28.717 "adrfam": "IPv4", 00:12:28.717 "traddr": "10.0.0.1", 00:12:28.717 "trsvcid": "33004" 00:12:28.717 }, 00:12:28.717 "auth": { 00:12:28.717 "state": "completed", 00:12:28.717 "digest": "sha512", 00:12:28.717 "dhgroup": "ffdhe8192" 00:12:28.717 } 00:12:28.717 } 00:12:28.717 ]' 00:12:28.717 07:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:28.717 07:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:28.717 07:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:28.717 07:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:28.717 07:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:28.717 07:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.717 07:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.717 07:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.976 07:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:12:29.542 07:37:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.542 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:29.542 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.542 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.542 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.542 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:29.542 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:29.542 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:29.800 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:12:29.800 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.800 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:29.800 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:29.800 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:29.800 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.800 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.800 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.800 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.800 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.800 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.800 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.367 00:12:30.638 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.638 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.638 07:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:30.908 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.908 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.908 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.908 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.908 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.908 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:30.908 { 00:12:30.908 "cntlid": 139, 00:12:30.908 "qid": 0, 00:12:30.908 "state": "enabled", 00:12:30.908 "thread": "nvmf_tgt_poll_group_000", 00:12:30.908 "listen_address": { 00:12:30.908 "trtype": "TCP", 00:12:30.908 "adrfam": "IPv4", 00:12:30.908 "traddr": "10.0.0.2", 00:12:30.908 "trsvcid": "4420" 00:12:30.908 }, 00:12:30.908 "peer_address": { 00:12:30.908 "trtype": "TCP", 00:12:30.908 "adrfam": "IPv4", 00:12:30.908 "traddr": "10.0.0.1", 00:12:30.908 "trsvcid": "33032" 00:12:30.908 }, 00:12:30.908 "auth": { 00:12:30.908 "state": "completed", 00:12:30.908 "digest": "sha512", 00:12:30.908 "dhgroup": "ffdhe8192" 00:12:30.908 } 00:12:30.908 } 00:12:30.908 ]' 00:12:30.908 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:30.908 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:30.908 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:30.908 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:30.908 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:30.908 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.908 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.908 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.166 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:01:YzZmNDg0YzRmMWMxNjhhZmRiMDA4ZmM0YzI2MGYyNTZo4xV2: --dhchap-ctrl-secret DHHC-1:02:N2M0YzNlNjFjZmZlYjA3Y2YxYTFlM2E0MmM5MGYyOGQ3NDRhODM1ZjI5MWY0NGM0FQ/mQw==: 00:12:31.732 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.732 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 
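Each pass then closes with the kernel-initiator leg seen directly above: nvme-cli is handed the plaintext DHHC-1 secrets that correspond to the configured key pair, the connect is expected to complete via DH-HMAC-CHAP, and the host entry is removed before the next digest/dhgroup combination. A condensed sketch of that tail end, with placeholder variables standing in for the real DHHC-1:xx:... strings and the target again assumed to be on its default RPC socket:

# kernel initiator: authenticate the connection with both secrets
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a \
    --hostid 437e2608-a818-4ddb-8068-388d756b599a \
    --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# target side: drop the host entry before the next pass
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a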
00:12:31.732 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.732 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.732 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.732 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:31.732 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:31.732 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:31.990 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:12:31.990 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:31.990 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:31.990 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:31.990 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:31.990 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.990 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.990 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.990 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.990 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.990 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.990 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.557 00:12:32.815 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:32.815 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.815 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:33.072 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.072 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.072 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.072 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.072 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.072 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:33.072 { 00:12:33.072 "cntlid": 141, 00:12:33.072 "qid": 0, 00:12:33.072 "state": "enabled", 00:12:33.072 "thread": "nvmf_tgt_poll_group_000", 00:12:33.072 "listen_address": { 00:12:33.072 "trtype": "TCP", 00:12:33.072 "adrfam": "IPv4", 00:12:33.072 "traddr": "10.0.0.2", 00:12:33.072 "trsvcid": "4420" 00:12:33.072 }, 00:12:33.072 "peer_address": { 00:12:33.072 "trtype": "TCP", 00:12:33.072 "adrfam": "IPv4", 00:12:33.072 "traddr": "10.0.0.1", 00:12:33.072 "trsvcid": "33050" 00:12:33.072 }, 00:12:33.072 "auth": { 00:12:33.072 "state": "completed", 00:12:33.072 "digest": "sha512", 00:12:33.072 "dhgroup": "ffdhe8192" 00:12:33.072 } 00:12:33.072 } 00:12:33.072 ]' 00:12:33.072 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.072 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:33.072 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.072 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:33.072 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:33.072 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.072 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.072 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.331 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:02:ZDQ0NWJjNzZhYTYxMGZhNzkwNzYyODE5ZGEwYmQyNDIxYWFlNWQ2MGQ5NDk4MGFiHBFIeQ==: --dhchap-ctrl-secret DHHC-1:01:NmIzM2IxZTI5N2NmZjU2ODU2MDc4OTc4ZGM3MzJmOTeZS3F4: 00:12:33.897 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:34.155 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:34.721 00:12:34.980 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:34.980 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:34.980 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.980 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.980 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.980 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.980 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.980 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.980 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:12:34.980 { 00:12:34.980 "cntlid": 143, 00:12:34.980 "qid": 0, 00:12:34.980 "state": "enabled", 00:12:34.980 "thread": "nvmf_tgt_poll_group_000", 00:12:34.980 "listen_address": { 00:12:34.980 "trtype": "TCP", 00:12:34.980 "adrfam": "IPv4", 00:12:34.980 "traddr": "10.0.0.2", 00:12:34.980 "trsvcid": "4420" 00:12:34.980 }, 00:12:34.980 "peer_address": { 00:12:34.980 "trtype": "TCP", 00:12:34.980 "adrfam": "IPv4", 00:12:34.980 "traddr": "10.0.0.1", 00:12:34.980 "trsvcid": "36714" 00:12:34.980 }, 00:12:34.980 "auth": { 00:12:34.980 "state": "completed", 00:12:34.980 "digest": "sha512", 00:12:34.980 "dhgroup": "ffdhe8192" 00:12:34.980 } 00:12:34.980 } 00:12:34.980 ]' 00:12:34.980 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:35.238 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:35.238 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:35.238 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:35.238 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:35.238 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.238 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.238 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.496 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:12:36.061 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.061 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:36.061 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.062 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.319 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.319 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:36.319 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:12:36.319 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:36.319 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:36.319 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 
--dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:36.319 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:36.579 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:12:36.579 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:36.579 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:36.579 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:36.579 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:36.579 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.579 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.579 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.579 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.579 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.579 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.579 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.145 00:12:37.145 07:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:37.145 07:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.145 07:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:37.404 07:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.404 07:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.404 07:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.404 07:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.404 07:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.404 07:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:37.404 { 
00:12:37.404 "cntlid": 145, 00:12:37.404 "qid": 0, 00:12:37.404 "state": "enabled", 00:12:37.404 "thread": "nvmf_tgt_poll_group_000", 00:12:37.404 "listen_address": { 00:12:37.404 "trtype": "TCP", 00:12:37.404 "adrfam": "IPv4", 00:12:37.404 "traddr": "10.0.0.2", 00:12:37.404 "trsvcid": "4420" 00:12:37.404 }, 00:12:37.404 "peer_address": { 00:12:37.404 "trtype": "TCP", 00:12:37.404 "adrfam": "IPv4", 00:12:37.404 "traddr": "10.0.0.1", 00:12:37.404 "trsvcid": "36746" 00:12:37.404 }, 00:12:37.404 "auth": { 00:12:37.404 "state": "completed", 00:12:37.404 "digest": "sha512", 00:12:37.404 "dhgroup": "ffdhe8192" 00:12:37.404 } 00:12:37.404 } 00:12:37.404 ]' 00:12:37.404 07:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:37.404 07:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:37.404 07:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:37.404 07:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:37.404 07:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:37.404 07:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.404 07:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.404 07:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.662 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:00:NTNhMDFjMzkwZGRhYjhlNmNhZTNlZjJlODJlYTg2M2EwM2U4MjY1OWY5MTQ4YzFizfhZBA==: --dhchap-ctrl-secret DHHC-1:03:ZTljY2IxY2UwMmFjMTdhNWUxMTUzYjc4NzRlNzU3NTlhZTlmMjkxNmZlMmUwMTEwNzMyYWNhNzZkOTMzOGI5MfxoXPY=: 00:12:38.236 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.236 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:38.236 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.236 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.236 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.237 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 00:12:38.237 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.237 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.237 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.237 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:38.237 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:38.237 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:38.237 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:38.237 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.237 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:38.237 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.237 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:38.237 07:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:38.802 request: 00:12:38.802 { 00:12:38.802 "name": "nvme0", 00:12:38.802 "trtype": "tcp", 00:12:38.802 "traddr": "10.0.0.2", 00:12:38.802 "adrfam": "ipv4", 00:12:38.802 "trsvcid": "4420", 00:12:38.802 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:38.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a", 00:12:38.802 "prchk_reftag": false, 00:12:38.802 "prchk_guard": false, 00:12:38.802 "hdgst": false, 00:12:38.802 "ddgst": false, 00:12:38.802 "dhchap_key": "key2", 00:12:38.802 "method": "bdev_nvme_attach_controller", 00:12:38.802 "req_id": 1 00:12:38.802 } 00:12:38.802 Got JSON-RPC error response 00:12:38.802 response: 00:12:38.802 { 00:12:38.802 "code": -5, 00:12:38.802 "message": "Input/output error" 00:12:38.802 } 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.060 07:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:39.060 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:39.626 request: 00:12:39.626 { 00:12:39.626 "name": "nvme0", 00:12:39.626 "trtype": "tcp", 00:12:39.626 "traddr": "10.0.0.2", 00:12:39.626 "adrfam": "ipv4", 00:12:39.626 "trsvcid": "4420", 00:12:39.626 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:39.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a", 00:12:39.626 "prchk_reftag": false, 00:12:39.626 "prchk_guard": false, 00:12:39.626 "hdgst": false, 00:12:39.626 "ddgst": false, 00:12:39.626 "dhchap_key": "key1", 00:12:39.626 "dhchap_ctrlr_key": "ckey2", 00:12:39.626 "method": "bdev_nvme_attach_controller", 00:12:39.626 "req_id": 1 00:12:39.626 } 00:12:39.626 Got JSON-RPC error response 00:12:39.626 response: 00:12:39.626 { 00:12:39.626 "code": -5, 
00:12:39.626 "message": "Input/output error" 00:12:39.626 } 00:12:39.626 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:39.626 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:39.626 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:39.626 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:39.626 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:39.626 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.626 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.626 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.626 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key1 00:12:39.626 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.626 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.626 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.626 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.626 07:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:39.626 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.626 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:39.626 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.626 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:39.626 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.626 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.626 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.192 request: 00:12:40.192 { 00:12:40.192 "name": "nvme0", 00:12:40.192 "trtype": "tcp", 00:12:40.192 "traddr": "10.0.0.2", 00:12:40.192 "adrfam": "ipv4", 00:12:40.192 "trsvcid": "4420", 00:12:40.192 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:40.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a", 00:12:40.192 "prchk_reftag": false, 00:12:40.192 "prchk_guard": false, 00:12:40.192 "hdgst": false, 00:12:40.192 "ddgst": false, 00:12:40.192 "dhchap_key": "key1", 00:12:40.192 "dhchap_ctrlr_key": "ckey1", 00:12:40.192 "method": "bdev_nvme_attach_controller", 00:12:40.192 "req_id": 1 00:12:40.192 } 00:12:40.192 Got JSON-RPC error response 00:12:40.192 response: 00:12:40.192 { 00:12:40.192 "code": -5, 00:12:40.192 "message": "Input/output error" 00:12:40.192 } 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 68743 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 68743 ']' 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 68743 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68743 00:12:40.192 killing process with pid 68743 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68743' 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 68743 00:12:40.192 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 68743 00:12:40.450 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:40.450 07:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:40.451 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:40.451 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.451 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=71704 00:12:40.451 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:40.451 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 71704 00:12:40.451 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 71704 ']' 00:12:40.451 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.451 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:40.451 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.451 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:40.451 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.386 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:41.386 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:41.386 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:41.386 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:41.386 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.644 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.644 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:41.644 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 71704 00:12:41.644 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 71704 ']' 00:12:41.645 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.645 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:41.645 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
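The restart traced above (target/auth.sh@139) kills the first target (pid 68743) and brings a new one up as pid 71704 with --wait-for-rpc and nvmf_auth debug logging enabled. A minimal sketch of that step, reconstructed only from the commands visible in the trace; the readiness loop is a simplified stand-in (an assumption) for the suite's waitforlisten helper, and rpc_get_methods is used here only as a generic "is the RPC socket answering" probe:

  # Restart the target in wait-for-rpc mode with auth logging, as in the trace above.
  # Binary path, flags, and the netns prefix are copied from the nvmfappstart line.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5    # keep polling until /var/tmp/spdk.sock accepts RPCs
  done

Because the target is in --wait-for-rpc mode, the rpc_cmd block at target/auth.sh@143 that follows presumably finishes initialization before the sha512/ffdhe8192 connect check; its heredoc contents are not echoed in the xtrace output, so that is inferred rather than shown.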
00:12:41.645 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:41.645 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.645 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:41.645 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:41.645 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:12:41.645 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.645 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.903 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.903 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:12:41.903 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.903 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:41.903 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:41.903 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:41.903 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.903 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:12:41.903 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.903 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.903 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.903 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:41.903 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:42.469 00:12:42.469 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:42.469 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:42.469 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.727 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.727 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:12:42.727 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.727 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.727 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.727 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:42.727 { 00:12:42.727 "cntlid": 1, 00:12:42.727 "qid": 0, 00:12:42.727 "state": "enabled", 00:12:42.727 "thread": "nvmf_tgt_poll_group_000", 00:12:42.727 "listen_address": { 00:12:42.727 "trtype": "TCP", 00:12:42.727 "adrfam": "IPv4", 00:12:42.727 "traddr": "10.0.0.2", 00:12:42.727 "trsvcid": "4420" 00:12:42.727 }, 00:12:42.727 "peer_address": { 00:12:42.727 "trtype": "TCP", 00:12:42.727 "adrfam": "IPv4", 00:12:42.727 "traddr": "10.0.0.1", 00:12:42.727 "trsvcid": "36792" 00:12:42.727 }, 00:12:42.727 "auth": { 00:12:42.727 "state": "completed", 00:12:42.727 "digest": "sha512", 00:12:42.727 "dhgroup": "ffdhe8192" 00:12:42.727 } 00:12:42.727 } 00:12:42.727 ]' 00:12:42.727 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:42.727 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:42.728 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:42.986 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:42.986 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:42.986 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.986 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.986 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.244 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid 437e2608-a818-4ddb-8068-388d756b599a --dhchap-secret DHHC-1:03:MWRmYmVhODZiOTllNGYyZjNiMzhkNTFhNWZlZWYzZjE3NWExM2I2OTExZjcyNWU4Y2ZhMDdmMGViNjY5NjgwMwE9QUQ=: 00:12:43.811 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.811 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:43.811 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.811 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.811 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.811 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --dhchap-key key3 00:12:43.811 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.811 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.811 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.811 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:43.811 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:44.070 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:44.070 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:44.070 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:44.070 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:44.070 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:44.070 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:44.070 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:44.070 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:44.070 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:44.329 request: 00:12:44.329 { 00:12:44.329 "name": "nvme0", 00:12:44.329 "trtype": "tcp", 00:12:44.329 "traddr": "10.0.0.2", 00:12:44.329 "adrfam": "ipv4", 00:12:44.329 "trsvcid": "4420", 00:12:44.329 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:44.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a", 00:12:44.329 "prchk_reftag": false, 00:12:44.329 "prchk_guard": false, 00:12:44.329 "hdgst": false, 00:12:44.329 "ddgst": false, 00:12:44.329 "dhchap_key": "key3", 00:12:44.329 "method": "bdev_nvme_attach_controller", 00:12:44.329 "req_id": 1 00:12:44.329 } 00:12:44.329 Got JSON-RPC error response 00:12:44.329 response: 00:12:44.329 { 00:12:44.329 "code": -5, 00:12:44.329 "message": "Input/output error" 00:12:44.329 } 00:12:44.329 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 
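The block above (target/auth.sh@156-158) re-adds the host with key3, restricts the host-side DH-HMAC-CHAP digests to sha256 via bdev_nvme_set_options, and then expects bdev_nvme_attach_controller with key3 to fail, which it does with the -5 Input/output error shown. A condensed sketch of that negative check using the same RPCs and arguments that appear in the trace; the explicit if/exit is a stand-in for the suite's NOT/valid_exec_arg wrappers:

  # Negative DH-HMAC-CHAP digest test, condensed from the trace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
  if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
      echo "attach unexpectedly succeeded after restricting digests to sha256" >&2
      exit 1
  fi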
00:12:44.329 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:44.329 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:44.329 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:44.329 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:12:44.329 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:12:44.329 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:44.329 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:44.895 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:44.896 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:44.896 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:44.896 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:44.896 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:44.896 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:44.896 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:44.896 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:44.896 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:44.896 request: 00:12:44.896 { 00:12:44.896 "name": "nvme0", 00:12:44.896 "trtype": "tcp", 00:12:44.896 "traddr": "10.0.0.2", 00:12:44.896 "adrfam": "ipv4", 00:12:44.896 "trsvcid": "4420", 00:12:44.896 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:44.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a", 00:12:44.896 "prchk_reftag": false, 00:12:44.896 "prchk_guard": false, 00:12:44.896 "hdgst": false, 00:12:44.896 "ddgst": false, 00:12:44.896 "dhchap_key": "key3", 00:12:44.896 "method": "bdev_nvme_attach_controller", 00:12:44.896 "req_id": 1 00:12:44.896 } 00:12:44.896 Got JSON-RPC error response 
00:12:44.896 response: 00:12:44.896 { 00:12:44.896 "code": -5, 00:12:44.896 "message": "Input/output error" 00:12:44.896 } 00:12:44.896 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:44.896 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:44.896 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:44.896 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:44.896 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:44.896 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:12:44.896 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:44.896 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:44.896 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:44.896 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:45.155 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:45.155 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.155 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.155 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.155 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:45.155 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.155 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.155 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.155 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:45.155 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:45.155 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:45.155 07:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:45.155 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:45.155 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:45.155 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:45.155 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:45.155 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:45.413 request: 00:12:45.413 { 00:12:45.413 "name": "nvme0", 00:12:45.413 "trtype": "tcp", 00:12:45.413 "traddr": "10.0.0.2", 00:12:45.413 "adrfam": "ipv4", 00:12:45.413 "trsvcid": "4420", 00:12:45.413 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:45.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a", 00:12:45.413 "prchk_reftag": false, 00:12:45.413 "prchk_guard": false, 00:12:45.413 "hdgst": false, 00:12:45.413 "ddgst": false, 00:12:45.413 "dhchap_key": "key0", 00:12:45.413 "dhchap_ctrlr_key": "key1", 00:12:45.413 "method": "bdev_nvme_attach_controller", 00:12:45.413 "req_id": 1 00:12:45.413 } 00:12:45.413 Got JSON-RPC error response 00:12:45.413 response: 00:12:45.413 { 00:12:45.413 "code": -5, 00:12:45.413 "message": "Input/output error" 00:12:45.413 } 00:12:45.413 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:45.413 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:45.413 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:45.413 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:45.413 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:45.413 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:45.981 00:12:45.981 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:12:45.981 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.981 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:12:45.981 07:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.981 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.981 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.239 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:12:46.239 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:12:46.239 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 68775 00:12:46.239 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 68775 ']' 00:12:46.239 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 68775 00:12:46.239 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:46.239 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:46.239 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68775 00:12:46.239 killing process with pid 68775 00:12:46.239 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:46.239 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:46.239 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68775' 00:12:46.239 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 68775 00:12:46.239 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 68775 00:12:46.804 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:46.804 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:46.804 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:12:46.804 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:46.804 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:12:46.804 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:46.804 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:46.804 rmmod nvme_tcp 00:12:47.061 rmmod nvme_fabrics 00:12:47.061 rmmod nvme_keyring 00:12:47.061 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:47.061 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:12:47.061 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:12:47.061 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 71704 ']' 00:12:47.061 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 71704 00:12:47.061 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 71704 ']' 00:12:47.061 
07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 71704 00:12:47.061 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:47.061 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:47.061 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71704 00:12:47.061 killing process with pid 71704 00:12:47.061 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:47.061 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:47.061 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71704' 00:12:47.061 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 71704 00:12:47.061 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 71704 00:12:47.319 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:47.319 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:47.319 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:47.319 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:47.319 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:47.319 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.319 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.319 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.319 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:47.319 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Zj9 /tmp/spdk.key-sha256.CUt /tmp/spdk.key-sha384.eZS /tmp/spdk.key-sha512.3rF /tmp/spdk.key-sha512.Ccx /tmp/spdk.key-sha384.HHN /tmp/spdk.key-sha256.yiI '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:47.319 00:12:47.319 real 2m41.414s 00:12:47.319 user 6m25.712s 00:12:47.319 sys 0m25.822s 00:12:47.319 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:47.319 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.319 ************************************ 00:12:47.319 END TEST nvmf_auth_target 00:12:47.319 ************************************ 00:12:47.319 07:38:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:12:47.319 07:38:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:47.319 07:38:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:47.319 07:38:12 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:12:47.319 07:38:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:47.319 ************************************ 00:12:47.319 START TEST nvmf_bdevio_no_huge 00:12:47.319 ************************************ 00:12:47.319 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:47.577 * Looking for test storage... 00:12:47.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.577 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:47.578 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:47.578 Cannot find device "nvmf_tgt_br" 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:47.578 Cannot find device "nvmf_tgt_br2" 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:47.578 Cannot find device "nvmf_tgt_br" 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:47.578 Cannot find device "nvmf_tgt_br2" 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:47.578 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:47.578 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:47.578 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:47.837 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:47.837 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:47.837 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:47.837 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:47.837 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:47.837 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:47.837 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:47.837 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:47.837 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:47.837 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:47.838 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:47.838 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:47.838 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:47.838 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:47.838 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:47.838 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:47.838 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:47.838 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:47.838 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:47.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:12:47.838 00:12:47.838 --- 10.0.0.2 ping statistics --- 00:12:47.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.838 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:12:47.838 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:47.838 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:47.838 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:12:47.838 00:12:47.838 --- 10.0.0.3 ping statistics --- 00:12:47.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.838 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:12:47.838 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:47.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:47.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:12:47.838 00:12:47.838 --- 10.0.0.1 ping statistics --- 00:12:47.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.838 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:12:47.838 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.838 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:12:47.839 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:47.839 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.839 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:47.839 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:47.839 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.839 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:47.839 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:47.839 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:47.839 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:47.839 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:47.839 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:47.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.839 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72024 00:12:47.839 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72024 00:12:47.839 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:47.840 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 72024 ']' 00:12:47.840 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.840 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:47.840 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.840 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:47.840 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:47.840 [2024-07-26 07:38:13.397532] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:12:47.840 [2024-07-26 07:38:13.397659] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:48.100 [2024-07-26 07:38:13.540109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.100 [2024-07-26 07:38:13.699987] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.100 [2024-07-26 07:38:13.700237] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.100 [2024-07-26 07:38:13.700407] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.100 [2024-07-26 07:38:13.700640] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.100 [2024-07-26 07:38:13.700656] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.358 [2024-07-26 07:38:13.700837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:48.358 [2024-07-26 07:38:13.700962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:48.358 [2024-07-26 07:38:13.702504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:48.358 [2024-07-26 07:38:13.702576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.358 [2024-07-26 07:38:13.707940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:48.924 [2024-07-26 07:38:14.336355] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:48.924 Malloc0 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.924 07:38:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:48.924 [2024-07-26 07:38:14.377331] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:48.924 { 00:12:48.924 "params": { 00:12:48.924 "name": "Nvme$subsystem", 00:12:48.924 "trtype": "$TEST_TRANSPORT", 00:12:48.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:48.924 "adrfam": "ipv4", 00:12:48.924 "trsvcid": "$NVMF_PORT", 00:12:48.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:48.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:48.924 "hdgst": ${hdgst:-false}, 00:12:48.924 "ddgst": ${ddgst:-false} 00:12:48.924 }, 00:12:48.924 "method": "bdev_nvme_attach_controller" 00:12:48.924 } 00:12:48.924 EOF 00:12:48.924 )") 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:12:48.924 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:48.924 "params": { 00:12:48.924 "name": "Nvme1", 00:12:48.924 "trtype": "tcp", 00:12:48.925 "traddr": "10.0.0.2", 00:12:48.925 "adrfam": "ipv4", 00:12:48.925 "trsvcid": "4420", 00:12:48.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:48.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:48.925 "hdgst": false, 00:12:48.925 "ddgst": false 00:12:48.925 }, 00:12:48.925 "method": "bdev_nvme_attach_controller" 00:12:48.925 }' 00:12:48.925 [2024-07-26 07:38:14.436657] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:12:48.925 [2024-07-26 07:38:14.437121] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72060 ] 00:12:49.183 [2024-07-26 07:38:14.582595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:49.183 [2024-07-26 07:38:14.739237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.183 [2024-07-26 07:38:14.739368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.183 [2024-07-26 07:38:14.739373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.183 [2024-07-26 07:38:14.754409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:49.442 I/O targets: 00:12:49.442 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:49.442 00:12:49.442 00:12:49.442 CUnit - A unit testing framework for C - Version 2.1-3 00:12:49.442 http://cunit.sourceforge.net/ 00:12:49.442 00:12:49.442 00:12:49.442 Suite: bdevio tests on: Nvme1n1 00:12:49.442 Test: blockdev write read block ...passed 00:12:49.442 Test: blockdev write zeroes read block ...passed 00:12:49.442 Test: blockdev write zeroes read no split ...passed 00:12:49.442 Test: blockdev write zeroes read split ...passed 00:12:49.442 Test: blockdev write zeroes read split partial ...passed 00:12:49.442 Test: blockdev reset ...[2024-07-26 07:38:14.963735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:49.442 [2024-07-26 07:38:14.964010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d0d870 (9): Bad file descriptor 00:12:49.442 [2024-07-26 07:38:14.979608] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:49.442 passed 00:12:49.442 Test: blockdev write read 8 blocks ...passed 00:12:49.442 Test: blockdev write read size > 128k ...passed 00:12:49.442 Test: blockdev write read invalid size ...passed 00:12:49.442 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:49.442 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:49.442 Test: blockdev write read max offset ...passed 00:12:49.442 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:49.442 Test: blockdev writev readv 8 blocks ...passed 00:12:49.442 Test: blockdev writev readv 30 x 1block ...passed 00:12:49.442 Test: blockdev writev readv block ...passed 00:12:49.442 Test: blockdev writev readv size > 128k ...passed 00:12:49.442 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:49.442 Test: blockdev comparev and writev ...[2024-07-26 07:38:14.990932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:49.442 [2024-07-26 07:38:14.990985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:49.442 [2024-07-26 07:38:14.991011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:49.442 [2024-07-26 07:38:14.991024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:49.442 [2024-07-26 07:38:14.991335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:49.442 [2024-07-26 07:38:14.991356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:49.442 [2024-07-26 07:38:14.991377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:49.442 [2024-07-26 07:38:14.991389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:49.442 [2024-07-26 07:38:14.991692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:49.442 [2024-07-26 07:38:14.992029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:49.442 [2024-07-26 07:38:14.992069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:49.442 [2024-07-26 07:38:14.992084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:49.442 [2024-07-26 07:38:14.992386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:49.442 [2024-07-26 07:38:14.992413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:49.442 [2024-07-26 07:38:14.992434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:49.442 [2024-07-26 07:38:14.992447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:49.442 passed 00:12:49.442 Test: blockdev nvme passthru rw ...passed 00:12:49.442 Test: blockdev nvme passthru vendor specific ...passed 00:12:49.442 Test: blockdev nvme admin passthru ...[2024-07-26 07:38:14.994089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:49.442 [2024-07-26 07:38:14.994132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:49.442 [2024-07-26 07:38:14.994255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:49.442 [2024-07-26 07:38:14.994276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:49.442 [2024-07-26 07:38:14.994391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:49.442 [2024-07-26 07:38:14.994410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:49.442 [2024-07-26 07:38:14.994535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:49.442 [2024-07-26 07:38:14.994556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:49.442 passed 00:12:49.442 Test: blockdev copy ...passed 00:12:49.442 00:12:49.442 Run Summary: Type Total Ran Passed Failed Inactive 00:12:49.442 suites 1 1 n/a 0 0 00:12:49.442 tests 23 23 23 0 0 00:12:49.442 asserts 152 152 152 0 n/a 00:12:49.442 00:12:49.442 Elapsed time = 0.178 seconds 00:12:50.009 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.009 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.009 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:50.009 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.010 rmmod nvme_tcp 00:12:50.010 rmmod nvme_fabrics 00:12:50.010 rmmod nvme_keyring 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72024 ']' 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 72024 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 72024 ']' 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 72024 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72024 00:12:50.010 killing process with pid 72024 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72024' 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 72024 00:12:50.010 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 72024 00:12:50.605 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:50.605 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:50.605 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:50.605 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.605 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:50.605 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.605 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.605 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.605 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:50.605 ************************************ 00:12:50.605 END TEST nvmf_bdevio_no_huge 00:12:50.605 ************************************ 00:12:50.605 00:12:50.605 real 0m3.166s 00:12:50.605 user 0m10.203s 00:12:50.605 sys 0m1.285s 00:12:50.605 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.605 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:50.605 07:38:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:50.605 07:38:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
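For reference, the nvmftestfini teardown traced just above condenses to roughly the sequence below. This is a sketch assembled only from the commands visible in this trace: _remove_spdk_ns is a harness helper whose body is not expanded in the log, and 72024 is the nvmf_tgt PID of this particular run.

  sync
  modprobe -v -r nvme-tcp        # rmmod output above shows nvme_fabrics / nvme_keyring dropping out as dependencies
  modprobe -v -r nvme-fabrics
  kill 72024 && wait 72024       # killprocess: stop the nvmf_tgt started with nvmfappstart -m 0x78
  _remove_spdk_ns                # harness helper (body not shown in the trace); removes the nvmf_tgt_ns_spdk namespace
  ip -4 addr flush nvmf_init_if  # drop the initiator-side 10.0.0.1/24 address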
00:12:50.605 07:38:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.605 07:38:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.605 ************************************ 00:12:50.605 START TEST nvmf_tls 00:12:50.605 ************************************ 00:12:50.605 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:50.605 * Looking for test storage... 00:12:50.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:50.605 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:50.605 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:50.605 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.605 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.605 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
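The nvmftestinit call traced below rebuilds the same veth/bridge topology that was set up for the bdevio run earlier in this log. Condensed into a plain script, the traced commands amount to the following sketch; the interface names and 10.0.0.x addresses are exactly the ones used by the harness, and the best-effort cleanup attempts and ping checks around them are omitted here.

  # namespace that will host nvmf_tgt
  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per endpoint; the *_br ends get enslaved to a bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # target-side ends live inside the namespace that runs nvmf_tgt
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator at 10.0.0.1, target listeners at 10.0.0.2 / 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring every link up on both sides
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # tie the host-side ends together and let NVMe/TCP (port 4420) in
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings to 10.0.0.2, 10.0.0.3 and (from inside the namespace) 10.0.0.1 that follow in the trace simply verify this topology before the target is started.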
00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:50.606 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:50.867 Cannot find device 
"nvmf_tgt_br" 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # true 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:50.867 Cannot find device "nvmf_tgt_br2" 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # true 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:50.867 Cannot find device "nvmf_tgt_br" 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:50.867 Cannot find device "nvmf_tgt_br2" 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:50.867 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:50.867 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:50.867 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:51.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:12:51.125 00:12:51.125 --- 10.0.0.2 ping statistics --- 00:12:51.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.125 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:51.125 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:51.125 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:12:51.125 00:12:51.125 --- 10.0.0.3 ping statistics --- 00:12:51.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.125 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:51.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:51.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:12:51.125 00:12:51.125 --- 10.0.0.1 ping statistics --- 00:12:51.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.125 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.125 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:51.126 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:51.126 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72243 00:12:51.126 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:51.126 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72243 00:12:51.126 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72243 ']' 00:12:51.126 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.126 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.126 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.126 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.126 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:51.126 [2024-07-26 07:38:16.603074] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:12:51.126 [2024-07-26 07:38:16.603176] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.384 [2024-07-26 07:38:16.747224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.384 [2024-07-26 07:38:16.880019] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.384 [2024-07-26 07:38:16.880099] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.384 [2024-07-26 07:38:16.880113] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.384 [2024-07-26 07:38:16.880124] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.384 [2024-07-26 07:38:16.880134] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.384 [2024-07-26 07:38:16.880168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.950 07:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:51.950 07:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:51.950 07:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:51.950 07:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:51.950 07:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:52.208 07:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.208 07:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:12:52.208 07:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:52.466 true 00:12:52.466 07:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:52.466 07:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:12:52.725 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:12:52.725 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:12:52.725 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:52.983 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:12:52.983 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:53.240 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:12:53.240 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:12:53.240 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:53.498 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:12:53.498 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:12:53.756 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:12:53.756 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:12:53.756 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:53.756 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:12:54.015 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:12:54.015 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:12:54.015 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:54.273 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:54.273 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:12:54.273 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:12:54.273 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:12:54.273 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:54.531 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:54.531 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:12:54.790 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:12:54.790 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:12:54.790 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:54.790 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:54.790 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:54.790 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:54.790 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:12:54.790 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:54.790 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:54.790 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:54.790 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:54.790 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:54.790 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:54.790 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:54.790 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:12:54.790 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:54.790 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:55.049 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:55.049 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:12:55.049 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.P4yT23dagx 00:12:55.049 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:12:55.049 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.rdsDKaCaxa 00:12:55.049 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:55.049 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:55.049 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.P4yT23dagx 00:12:55.049 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.rdsDKaCaxa 00:12:55.049 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:55.049 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:55.308 [2024-07-26 07:38:20.904949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:55.566 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.P4yT23dagx 00:12:55.566 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.P4yT23dagx 00:12:55.566 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:55.825 [2024-07-26 07:38:21.223505] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.825 07:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:56.083 07:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:56.083 [2024-07-26 07:38:21.651613] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:56.083 [2024-07-26 07:38:21.651873] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.083 07:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:56.340 malloc0 00:12:56.340 07:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:56.598 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.P4yT23dagx 00:12:56.856 [2024-07-26 07:38:22.318038] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:56.856 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.P4yT23dagx 00:13:09.053 Initializing NVMe Controllers 00:13:09.053 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:09.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:09.053 Initialization complete. Launching workers. 00:13:09.053 ======================================================== 00:13:09.053 Latency(us) 00:13:09.053 Device Information : IOPS MiB/s Average min max 00:13:09.053 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10069.57 39.33 6357.17 1447.16 8774.29 00:13:09.053 ======================================================== 00:13:09.053 Total : 10069.57 39.33 6357.17 1447.16 8774.29 00:13:09.053 00:13:09.053 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.P4yT23dagx 00:13:09.053 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:09.053 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:09.053 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:09.053 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.P4yT23dagx' 00:13:09.053 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:09.053 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72468 00:13:09.053 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:09.053 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:09.053 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72468 /var/tmp/bdevperf.sock 00:13:09.053 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72468 ']' 00:13:09.053 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:09.053 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:09.053 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:09.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
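Stripped of the xtrace noise, the TLS-enabled target that the spdk_nvme_perf run above and the bdevperf run below talk to is configured with the rpc.py calls sketched here. Everything is taken from the commands traced in this log; rpc is only shorthand for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and /tmp/tmp.P4yT23dagx is the mktemp PSK file from this run, written with echo -n and chmod 0600 as shown earlier.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # pin the ssl socket implementation to TLS 1.3, then finish framework init
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc framework_start_init
  # transport options come verbatim from NVMF_TRANSPORT_OPTS='-t tcp -o'
  $rpc nvmf_create_transport -t tcp -o
  # subsystem cnode1 with the serial number and namespace limit used by the harness
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k sets the 10.0.0.2:4420 listener up for TLS ("TLS support is considered experimental" in the trace)
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  # 32 MB malloc bdev with 4096-byte blocks, exposed as namespace 1
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # register the host NQN together with its PSK file; initiators pass the same file
  # (--psk-path for spdk_nvme_perf, --psk for bdev_nvme_attach_controller below)
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.P4yT23dagx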
00:13:09.053 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:09.053 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:09.053 [2024-07-26 07:38:32.591875] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:13:09.053 [2024-07-26 07:38:32.592160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72468 ] 00:13:09.053 [2024-07-26 07:38:32.733620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.053 [2024-07-26 07:38:32.848568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.053 [2024-07-26 07:38:32.924757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:09.053 07:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:09.053 07:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:09.053 07:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.P4yT23dagx 00:13:09.053 [2024-07-26 07:38:33.770950] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:09.053 [2024-07-26 07:38:33.771080] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:09.053 TLSTESTn1 00:13:09.053 07:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:09.053 Running I/O for 10 seconds... 
00:13:19.022 00:13:19.022 Latency(us) 00:13:19.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.022 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:19.022 Verification LBA range: start 0x0 length 0x2000 00:13:19.022 TLSTESTn1 : 10.01 4266.54 16.67 0.00 0.00 29946.50 5600.35 30265.72 00:13:19.022 =================================================================================================================== 00:13:19.022 Total : 4266.54 16.67 0.00 0.00 29946.50 5600.35 30265.72 00:13:19.022 0 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 72468 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72468 ']' 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72468 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72468 00:13:19.022 killing process with pid 72468 00:13:19.022 Received shutdown signal, test time was about 10.000000 seconds 00:13:19.022 00:13:19.022 Latency(us) 00:13:19.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.022 =================================================================================================================== 00:13:19.022 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72468' 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72468 00:13:19.022 [2024-07-26 07:38:44.052680] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72468 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rdsDKaCaxa 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rdsDKaCaxa 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:19.022 07:38:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rdsDKaCaxa 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rdsDKaCaxa' 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72602 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72602 /var/tmp/bdevperf.sock 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72602 ']' 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:19.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.022 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:19.022 [2024-07-26 07:38:44.403138] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:13:19.022 [2024-07-26 07:38:44.403591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72602 ] 00:13:19.022 [2024-07-26 07:38:44.540160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.280 [2024-07-26 07:38:44.654612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.280 [2024-07-26 07:38:44.727102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:19.846 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:19.846 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:19.846 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rdsDKaCaxa 00:13:20.104 [2024-07-26 07:38:45.620725] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:20.104 [2024-07-26 07:38:45.620866] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:20.104 [2024-07-26 07:38:45.626311] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:20.104 [2024-07-26 07:38:45.626764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19831f0 (107): Transport endpoint is not connected 00:13:20.104 [2024-07-26 07:38:45.627749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19831f0 (9): Bad file descriptor 00:13:20.104 [2024-07-26 07:38:45.628745] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:20.104 [2024-07-26 07:38:45.628763] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:20.104 [2024-07-26 07:38:45.628778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:20.104 request: 00:13:20.104 { 00:13:20.104 "name": "TLSTEST", 00:13:20.104 "trtype": "tcp", 00:13:20.104 "traddr": "10.0.0.2", 00:13:20.104 "adrfam": "ipv4", 00:13:20.104 "trsvcid": "4420", 00:13:20.104 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.104 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:20.104 "prchk_reftag": false, 00:13:20.104 "prchk_guard": false, 00:13:20.104 "hdgst": false, 00:13:20.104 "ddgst": false, 00:13:20.104 "psk": "/tmp/tmp.rdsDKaCaxa", 00:13:20.104 "method": "bdev_nvme_attach_controller", 00:13:20.104 "req_id": 1 00:13:20.104 } 00:13:20.104 Got JSON-RPC error response 00:13:20.104 response: 00:13:20.104 { 00:13:20.104 "code": -5, 00:13:20.104 "message": "Input/output error" 00:13:20.104 } 00:13:20.104 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72602 00:13:20.104 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72602 ']' 00:13:20.104 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72602 00:13:20.104 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:20.104 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:20.104 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72602 00:13:20.104 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:20.104 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:20.104 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72602' 00:13:20.104 killing process with pid 72602 00:13:20.104 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72602 00:13:20.104 Received shutdown signal, test time was about 10.000000 seconds 00:13:20.104 00:13:20.104 Latency(us) 00:13:20.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.105 =================================================================================================================== 00:13:20.105 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:20.105 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72602 00:13:20.105 [2024-07-26 07:38:45.688498] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:20.671 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:20.671 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:20.671 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:20.671 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:20.671 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:20.671 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.P4yT23dagx 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.P4yT23dagx 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.P4yT23dagx 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.P4yT23dagx' 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72635 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72635 /var/tmp/bdevperf.sock 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:20.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72635 ']' 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:20.672 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:20.672 [2024-07-26 07:38:46.049718] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
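The local es=0, valid_exec_arg, and (( !es == 0 )) lines that bracket this and the surrounding cases are the trace of a negation wrapper: the expected-to-fail run_bdevperf call only counts as a pass if it exits non-zero. A minimal sketch of that pattern follows; it is a simplified stand-in for illustration, not the exact helper from autotest_common.sh.

# NOT <cmd...>: succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    # A non-zero exit status from the wrapped command means "pass"; the real
    # helper also validates the argument and folds statuses above 128, as the
    # xtrace above hints.
    (( es != 0 ))
}

# Usage mirroring the trace (run_bdevperf is tls.sh's own function):
#   NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rdsDKaCaxa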
00:13:20.672 [2024-07-26 07:38:46.049814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72635 ] 00:13:20.672 [2024-07-26 07:38:46.189913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.930 [2024-07-26 07:38:46.312969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.930 [2024-07-26 07:38:46.386189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:21.497 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:21.497 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:21.497 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.P4yT23dagx 00:13:21.756 [2024-07-26 07:38:47.279870] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:21.756 [2024-07-26 07:38:47.280016] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:21.756 [2024-07-26 07:38:47.285415] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:21.756 [2024-07-26 07:38:47.285459] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:21.756 [2024-07-26 07:38:47.285525] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:21.756 [2024-07-26 07:38:47.285911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58f1f0 (107): Transport endpoint is not connected 00:13:21.756 [2024-07-26 07:38:47.286900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58f1f0 (9): Bad file descriptor 00:13:21.756 [2024-07-26 07:38:47.287895] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:21.756 [2024-07-26 07:38:47.287925] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:21.756 [2024-07-26 07:38:47.287957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:21.756 request: 00:13:21.756 { 00:13:21.756 "name": "TLSTEST", 00:13:21.756 "trtype": "tcp", 00:13:21.756 "traddr": "10.0.0.2", 00:13:21.756 "adrfam": "ipv4", 00:13:21.756 "trsvcid": "4420", 00:13:21.756 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:21.756 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:21.756 "prchk_reftag": false, 00:13:21.756 "prchk_guard": false, 00:13:21.756 "hdgst": false, 00:13:21.756 "ddgst": false, 00:13:21.756 "psk": "/tmp/tmp.P4yT23dagx", 00:13:21.756 "method": "bdev_nvme_attach_controller", 00:13:21.756 "req_id": 1 00:13:21.756 } 00:13:21.756 Got JSON-RPC error response 00:13:21.756 response: 00:13:21.756 { 00:13:21.756 "code": -5, 00:13:21.756 "message": "Input/output error" 00:13:21.756 } 00:13:21.756 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72635 00:13:21.756 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72635 ']' 00:13:21.756 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72635 00:13:21.756 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:21.756 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:21.756 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72635 00:13:21.756 killing process with pid 72635 00:13:21.756 Received shutdown signal, test time was about 10.000000 seconds 00:13:21.756 00:13:21.756 Latency(us) 00:13:21.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.756 =================================================================================================================== 00:13:21.756 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:21.756 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:21.756 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:21.756 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72635' 00:13:21.756 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72635 00:13:21.756 [2024-07-26 07:38:47.334326] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:21.756 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72635 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.P4yT23dagx 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.P4yT23dagx 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.P4yT23dagx 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.P4yT23dagx' 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72657 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72657 /var/tmp/bdevperf.sock 00:13:22.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72657 ']' 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:22.323 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:22.323 [2024-07-26 07:38:47.692694] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:13:22.323 [2024-07-26 07:38:47.692790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72657 ] 00:13:22.323 [2024-07-26 07:38:47.831225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.581 [2024-07-26 07:38:47.973108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.581 [2024-07-26 07:38:48.048184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:23.147 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:23.147 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:23.147 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.P4yT23dagx 00:13:23.405 [2024-07-26 07:38:48.913981] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:23.405 [2024-07-26 07:38:48.914112] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:23.405 [2024-07-26 07:38:48.925713] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:23.405 [2024-07-26 07:38:48.925752] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:23.405 [2024-07-26 07:38:48.925820] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:23.405 [2024-07-26 07:38:48.926024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f91f0 (107): Transport endpoint is not connected 00:13:23.405 [2024-07-26 07:38:48.927028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f91f0 (9): Bad file descriptor 00:13:23.405 [2024-07-26 07:38:48.928023] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:23.405 [2024-07-26 07:38:48.928053] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:23.405 [2024-07-26 07:38:48.928086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
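Both failures above, the wrong host NQN in the earlier case and the wrong subsystem NQN here, surface as the same target-side error because the target looks the key up by a PSK identity string built from a fixed prefix plus the host and subsystem NQNs, exactly as printed in the "Could not find PSK for identity: NVMe0R01 ..." messages. Purely to illustrate that lookup key (prefix and ordering are taken from those log messages, nothing else is assumed):

# PSK identity as it appears in the tcp.c / posix.c errors above:
#   "<version prefix> <hostnqn> <subnqn>"
hostnqn=nqn.2016-06.io.spdk:host1
subnqn=nqn.2016-06.io.spdk:cnode2
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
# Only identities registered on the target via nvmf_subsystem_add_host with a
# PSK resolve; any other pairing fails the TLS handshake before the controller
# can initialize, which is what these negative cases verify.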
00:13:23.405 request: 00:13:23.405 { 00:13:23.405 "name": "TLSTEST", 00:13:23.405 "trtype": "tcp", 00:13:23.405 "traddr": "10.0.0.2", 00:13:23.405 "adrfam": "ipv4", 00:13:23.405 "trsvcid": "4420", 00:13:23.405 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:23.405 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:23.405 "prchk_reftag": false, 00:13:23.405 "prchk_guard": false, 00:13:23.405 "hdgst": false, 00:13:23.405 "ddgst": false, 00:13:23.405 "psk": "/tmp/tmp.P4yT23dagx", 00:13:23.405 "method": "bdev_nvme_attach_controller", 00:13:23.405 "req_id": 1 00:13:23.405 } 00:13:23.405 Got JSON-RPC error response 00:13:23.405 response: 00:13:23.405 { 00:13:23.405 "code": -5, 00:13:23.405 "message": "Input/output error" 00:13:23.405 } 00:13:23.405 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72657 00:13:23.406 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72657 ']' 00:13:23.406 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72657 00:13:23.406 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:23.406 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:23.406 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72657 00:13:23.406 killing process with pid 72657 00:13:23.406 Received shutdown signal, test time was about 10.000000 seconds 00:13:23.406 00:13:23.406 Latency(us) 00:13:23.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.406 =================================================================================================================== 00:13:23.406 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:23.406 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:23.406 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:23.406 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72657' 00:13:23.406 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72657 00:13:23.406 [2024-07-26 07:38:48.982366] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:23.406 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72657 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72690 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72690 /var/tmp/bdevperf.sock 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72690 ']' 00:13:23.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:23.972 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:23.972 [2024-07-26 07:38:49.342629] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:13:23.973 [2024-07-26 07:38:49.342732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72690 ] 00:13:23.973 [2024-07-26 07:38:49.482571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.230 [2024-07-26 07:38:49.611710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.230 [2024-07-26 07:38:49.684604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:24.796 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:24.796 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:24.796 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:25.054 [2024-07-26 07:38:50.525992] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:25.054 [2024-07-26 07:38:50.527643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24aec00 (9): Bad file descriptor 00:13:25.054 [2024-07-26 07:38:50.528640] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:25.054 [2024-07-26 07:38:50.529032] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:25.054 [2024-07-26 07:38:50.529276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:25.054 request: 00:13:25.054 { 00:13:25.054 "name": "TLSTEST", 00:13:25.054 "trtype": "tcp", 00:13:25.054 "traddr": "10.0.0.2", 00:13:25.054 "adrfam": "ipv4", 00:13:25.054 "trsvcid": "4420", 00:13:25.054 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:25.054 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:25.054 "prchk_reftag": false, 00:13:25.054 "prchk_guard": false, 00:13:25.054 "hdgst": false, 00:13:25.054 "ddgst": false, 00:13:25.054 "method": "bdev_nvme_attach_controller", 00:13:25.054 "req_id": 1 00:13:25.054 } 00:13:25.054 Got JSON-RPC error response 00:13:25.054 response: 00:13:25.054 { 00:13:25.054 "code": -5, 00:13:25.054 "message": "Input/output error" 00:13:25.054 } 00:13:25.054 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72690 00:13:25.054 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72690 ']' 00:13:25.054 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72690 00:13:25.054 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:25.054 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:25.054 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72690 00:13:25.054 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:25.054 killing process with pid 72690 00:13:25.054 Received shutdown signal, test time was about 10.000000 seconds 00:13:25.054 00:13:25.055 Latency(us) 00:13:25.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.055 =================================================================================================================== 00:13:25.055 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:25.055 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:25.055 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72690' 00:13:25.055 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72690 00:13:25.055 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72690 00:13:25.313 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:25.313 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:25.313 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:25.313 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:25.313 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:25.313 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 72243 00:13:25.313 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72243 ']' 00:13:25.313 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72243 00:13:25.313 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:25.313 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:25.313 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 
-- # ps --no-headers -o comm= 72243 00:13:25.313 killing process with pid 72243 00:13:25.313 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:25.313 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:25.313 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72243' 00:13:25.313 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72243 00:13:25.313 [2024-07-26 07:38:50.903665] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:25.313 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72243 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.leHafEpJhQ 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.leHafEpJhQ 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:25.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
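The format_interchange_psk step a few entries above turns the raw hex key into the NVMe TLS PSK interchange string (NVMeTLSkey-1:02:...:) that is then written to /tmp/tmp.leHafEpJhQ and restricted to mode 0600. The sketch below reproduces that transformation; it assumes, consistently with the key_long value shown in the trace, that the encoded blob is the ASCII key followed by its zlib CRC-32 in little-endian byte order, and that the digest argument "2" becomes the ":02:" field of the prefix.

# Rebuild the interchange-format key used for the remaining cases:
#   NVMeTLSkey-1:<hash>:base64(key || CRC32(key), little-endian):
key="00112233445566778899aabbccddeeff0011223344556677"
python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
# Should print the same NVMeTLSkey-1:02:...: string as key_long above.
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
PY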
00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72728 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72728 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72728 ']' 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:25.879 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:25.879 [2024-07-26 07:38:51.353594] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:13:25.879 [2024-07-26 07:38:51.354683] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.159 [2024-07-26 07:38:51.495566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.159 [2024-07-26 07:38:51.603014] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.159 [2024-07-26 07:38:51.603079] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.159 [2024-07-26 07:38:51.603106] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.159 [2024-07-26 07:38:51.603114] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.159 [2024-07-26 07:38:51.603121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:26.159 [2024-07-26 07:38:51.603150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.159 [2024-07-26 07:38:51.677103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:26.733 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:26.733 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:26.733 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:26.733 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:26.733 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:26.733 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.733 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.leHafEpJhQ 00:13:26.733 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.leHafEpJhQ 00:13:26.733 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:26.992 [2024-07-26 07:38:52.546467] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.992 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:27.250 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:27.509 [2024-07-26 07:38:53.038539] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:27.509 [2024-07-26 07:38:53.038807] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.509 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:27.766 malloc0 00:13:27.766 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:28.024 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.leHafEpJhQ 00:13:28.281 [2024-07-26 07:38:53.824771] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:28.281 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.leHafEpJhQ 00:13:28.281 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:28.281 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:28.281 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:28.281 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.leHafEpJhQ' 00:13:28.281 07:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:28.281 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:28.281 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72782 00:13:28.281 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:28.281 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72782 /var/tmp/bdevperf.sock 00:13:28.281 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72782 ']' 00:13:28.281 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:28.281 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:28.281 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:28.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:28.281 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:28.281 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:28.539 [2024-07-26 07:38:53.886130] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:13:28.539 [2024-07-26 07:38:53.886401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72782 ] 00:13:28.539 [2024-07-26 07:38:54.024285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.797 [2024-07-26 07:38:54.153949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.798 [2024-07-26 07:38:54.229616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:29.365 07:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:29.365 07:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:29.365 07:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.leHafEpJhQ 00:13:29.623 [2024-07-26 07:38:55.046709] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:29.623 [2024-07-26 07:38:55.046882] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:29.623 TLSTESTn1 00:13:29.623 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:29.881 Running I/O for 10 seconds... 
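The TLS listener this second bdevperf run connects to was configured right after the target restart, a few entries back. Condensed from that trace, the target-side sequence is: create the TCP transport, create the subsystem, add a TLS-enabled listener (-k), back it with a malloc bdev, and register host1 together with the interchange-format key written earlier (paths and names are this run's).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.leHafEpJhQ   # mode 0600, contains the NVMeTLSkey-1:02:...: string

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k asks for a TLS-secured listener; the "TLS support is considered
# experimental" notice above comes from this call.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"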
00:13:39.861 00:13:39.861 Latency(us) 00:13:39.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.861 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:39.861 Verification LBA range: start 0x0 length 0x2000 00:13:39.861 TLSTESTn1 : 10.02 4252.42 16.61 0.00 0.00 30042.85 5987.61 28835.84 00:13:39.861 =================================================================================================================== 00:13:39.861 Total : 4252.42 16.61 0.00 0.00 30042.85 5987.61 28835.84 00:13:39.861 0 00:13:39.861 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:39.861 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 72782 00:13:39.861 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72782 ']' 00:13:39.861 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72782 00:13:39.861 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:39.861 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:39.861 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72782 00:13:39.861 killing process with pid 72782 00:13:39.861 Received shutdown signal, test time was about 10.000000 seconds 00:13:39.861 00:13:39.861 Latency(us) 00:13:39.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.861 =================================================================================================================== 00:13:39.861 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:39.861 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:39.861 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:39.861 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72782' 00:13:39.861 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72782 00:13:39.861 [2024-07-26 07:39:05.315502] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:39.861 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72782 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.leHafEpJhQ 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.leHafEpJhQ 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.leHafEpJhQ 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:40.120 07:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.leHafEpJhQ 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.leHafEpJhQ' 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:40.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72917 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72917 /var/tmp/bdevperf.sock 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72917 ']' 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:40.120 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:40.120 [2024-07-26 07:39:05.670624] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:13:40.120 [2024-07-26 07:39:05.670728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72917 ] 00:13:40.378 [2024-07-26 07:39:05.809926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.378 [2024-07-26 07:39:05.920978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.637 [2024-07-26 07:39:05.994068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:41.204 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:41.204 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:41.204 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.leHafEpJhQ 00:13:41.204 [2024-07-26 07:39:06.795546] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:41.204 [2024-07-26 07:39:06.795660] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:41.204 [2024-07-26 07:39:06.795672] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.leHafEpJhQ 00:13:41.204 request: 00:13:41.204 { 00:13:41.204 "name": "TLSTEST", 00:13:41.204 "trtype": "tcp", 00:13:41.204 "traddr": "10.0.0.2", 00:13:41.204 "adrfam": "ipv4", 00:13:41.204 "trsvcid": "4420", 00:13:41.204 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:41.204 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:41.204 "prchk_reftag": false, 00:13:41.204 "prchk_guard": false, 00:13:41.204 "hdgst": false, 00:13:41.204 "ddgst": false, 00:13:41.204 "psk": "/tmp/tmp.leHafEpJhQ", 00:13:41.204 "method": "bdev_nvme_attach_controller", 00:13:41.204 "req_id": 1 00:13:41.204 } 00:13:41.204 Got JSON-RPC error response 00:13:41.204 response: 00:13:41.204 { 00:13:41.204 "code": -1, 00:13:41.204 "message": "Operation not permitted" 00:13:41.204 } 00:13:41.462 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72917 00:13:41.462 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72917 ']' 00:13:41.462 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72917 00:13:41.462 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:41.462 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:41.462 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72917 00:13:41.462 killing process with pid 72917 00:13:41.462 Received shutdown signal, test time was about 10.000000 seconds 00:13:41.462 00:13:41.462 Latency(us) 00:13:41.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.462 =================================================================================================================== 00:13:41.462 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:41.462 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 
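This final negative case flips the key file from mode 0600 to 0666 and confirms that bdev_nvme refuses to load it: "Incorrect permissions for PSK file", "Could not load PSK", and the "Operation not permitted" JSON-RPC error above. The trace only demonstrates that 0600 is accepted and 0666 is rejected; a conservative guard a caller might add before handing the path to bdev_nvme_attach_controller is sketched below (illustrative, not part of tls.sh).

psk_path=/tmp/tmp.leHafEpJhQ   # this run's key file

# Keep the PSK private to the owner; the world-writable 0666 copy above was
# rejected before the controller could be created.
chmod 0600 "$psk_path"
if [[ "$(stat -c '%a' "$psk_path")" != "600" ]]; then
    echo "refusing to use PSK with loose permissions: $psk_path" >&2
    exit 1
fi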
00:13:41.462 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:41.462 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72917' 00:13:41.462 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72917 00:13:41.462 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72917 00:13:41.721 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:41.721 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:41.721 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:41.721 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:41.721 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:41.721 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 72728 00:13:41.721 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72728 ']' 00:13:41.721 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72728 00:13:41.721 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:41.721 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:41.721 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72728 00:13:41.721 killing process with pid 72728 00:13:41.721 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:41.721 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:41.721 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72728' 00:13:41.721 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72728 00:13:41.721 [2024-07-26 07:39:07.152111] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:41.721 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72728 00:13:41.980 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:13:41.980 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:41.980 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:41.980 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:41.980 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72956 00:13:41.980 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:41.980 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72956 00:13:41.980 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72956 ']' 00:13:41.980 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.980 07:39:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:41.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.980 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.980 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:41.980 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:41.980 [2024-07-26 07:39:07.521876] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:13:41.980 [2024-07-26 07:39:07.521959] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.238 [2024-07-26 07:39:07.652257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.238 [2024-07-26 07:39:07.756457] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.238 [2024-07-26 07:39:07.756559] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.238 [2024-07-26 07:39:07.756587] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.238 [2024-07-26 07:39:07.756595] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.238 [2024-07-26 07:39:07.756603] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
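The target for this case is brought up with nvmfappstart -m 0x2, which launches nvmf_tgt inside the nvmf_tgt_ns_spdk network namespace and then waits for its RPC socket; waitforlisten is the harness helper doing the polling. A rough equivalent, sketched with rpc_get_methods used purely as a readiness probe (the probe call is an assumption, not part of the test script):

    # Launch the target in the test netns and poll /var/tmp/spdk.sock until
    # it answers. rpc_get_methods is only a readiness check here.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done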
00:13:42.238 [2024-07-26 07:39:07.756631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.238 [2024-07-26 07:39:07.828225] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:43.173 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.173 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:43.173 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:43.173 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:43.173 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:43.173 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.174 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.leHafEpJhQ 00:13:43.174 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:43.174 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.leHafEpJhQ 00:13:43.174 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:13:43.174 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.174 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:13:43.174 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.174 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.leHafEpJhQ 00:13:43.174 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.leHafEpJhQ 00:13:43.174 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:43.431 [2024-07-26 07:39:08.794744] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.431 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:43.689 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:43.689 [2024-07-26 07:39:09.274809] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:43.689 [2024-07-26 07:39:09.275063] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.947 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:43.947 malloc0 00:13:43.947 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:44.206 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.leHafEpJhQ 00:13:44.464 [2024-07-26 07:39:09.945121] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:44.464 [2024-07-26 07:39:09.945193] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:13:44.464 [2024-07-26 07:39:09.945227] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:44.464 request: 00:13:44.464 { 00:13:44.464 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.464 "host": "nqn.2016-06.io.spdk:host1", 00:13:44.464 "psk": "/tmp/tmp.leHafEpJhQ", 00:13:44.464 "method": "nvmf_subsystem_add_host", 00:13:44.464 "req_id": 1 00:13:44.464 } 00:13:44.464 Got JSON-RPC error response 00:13:44.464 response: 00:13:44.464 { 00:13:44.464 "code": -32603, 00:13:44.464 "message": "Internal error" 00:13:44.464 } 00:13:44.464 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:44.464 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:44.464 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:44.464 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:44.465 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 72956 00:13:44.465 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72956 ']' 00:13:44.465 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72956 00:13:44.465 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:44.465 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:44.465 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72956 00:13:44.465 killing process with pid 72956 00:13:44.465 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:44.465 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:44.465 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72956' 00:13:44.465 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72956 00:13:44.465 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72956 00:13:44.723 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.leHafEpJhQ 00:13:44.723 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:13:44.723 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:44.723 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:44.723 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.723 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73013 00:13:44.723 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:44.723 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73013 00:13:44.723 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.723 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73013 ']' 00:13:44.723 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.723 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:44.723 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.723 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:44.723 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.982 [2024-07-26 07:39:10.377948] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:13:44.982 [2024-07-26 07:39:10.378255] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.982 [2024-07-26 07:39:10.514098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.240 [2024-07-26 07:39:10.622642] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.240 [2024-07-26 07:39:10.623335] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.240 [2024-07-26 07:39:10.623437] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.240 [2024-07-26 07:39:10.623554] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.240 [2024-07-26 07:39:10.623622] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
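The -32603 "Internal error" from nvmf_subsystem_add_host above is likewise expected: the subsystem was pointed at the still world-readable key. target/tls.sh then runs chmod 0600 on /tmp/tmp.leHafEpJhQ (logged just before this restart) and repeats the same target-side setup, which succeeds below. Condensed, the passing sequence uses only RPCs that appear in this log:

    # Target-side TLS setup that succeeds once the key file is mode 0600.
    # -k on the listener requests TLS (save_config later shows "secure_channel": true).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.leHafEpJhQ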
00:13:45.240 [2024-07-26 07:39:10.623711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.240 [2024-07-26 07:39:10.695305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:45.807 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:45.807 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:45.807 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:45.807 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:45.807 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.807 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.807 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.leHafEpJhQ 00:13:45.807 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.leHafEpJhQ 00:13:45.807 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:46.065 [2024-07-26 07:39:11.585478] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.065 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:46.323 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:46.581 [2024-07-26 07:39:12.041608] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:46.581 [2024-07-26 07:39:12.041861] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.581 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:46.844 malloc0 00:13:46.844 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:47.105 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.leHafEpJhQ 00:13:47.105 [2024-07-26 07:39:12.692314] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:47.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
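With the key accepted, the initiator side follows the usual bdevperf pattern: bdevperf is started with -z so it idles until configured over /var/tmp/bdevperf.sock, the TLS controller is attached with the bdev_nvme_attach_controller call logged below, and the verify workload only starts when bdevperf.py perform_tests is invoked near the end of this log. Sketched with the commands from this run:

    # Initiator-side flow (sketch): -z makes bdevperf wait for RPC
    # configuration; perform_tests then triggers the 10 s verify workload.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # ... attach the TLS controller via rpc.py -s /var/tmp/bdevperf.sock (see below) ...
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests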
00:13:47.363 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73068 00:13:47.363 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:47.363 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:47.363 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73068 /var/tmp/bdevperf.sock 00:13:47.363 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73068 ']' 00:13:47.363 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:47.363 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:47.363 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:47.363 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:47.363 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.363 [2024-07-26 07:39:12.767081] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:13:47.363 [2024-07-26 07:39:12.767410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73068 ] 00:13:47.363 [2024-07-26 07:39:12.907374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.621 [2024-07-26 07:39:13.028643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.621 [2024-07-26 07:39:13.106365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:48.187 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:48.187 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:48.187 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.leHafEpJhQ 00:13:48.444 [2024-07-26 07:39:13.853454] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:48.444 [2024-07-26 07:39:13.853991] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:48.444 TLSTESTn1 00:13:48.444 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:48.702 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:13:48.702 "subsystems": [ 00:13:48.702 { 00:13:48.702 "subsystem": "keyring", 00:13:48.702 "config": [] 00:13:48.702 }, 00:13:48.702 { 00:13:48.702 "subsystem": "iobuf", 00:13:48.702 "config": [ 00:13:48.702 { 00:13:48.702 "method": "iobuf_set_options", 00:13:48.702 "params": { 00:13:48.702 "small_pool_count": 8192, 00:13:48.702 
"large_pool_count": 1024, 00:13:48.702 "small_bufsize": 8192, 00:13:48.702 "large_bufsize": 135168 00:13:48.702 } 00:13:48.702 } 00:13:48.702 ] 00:13:48.702 }, 00:13:48.702 { 00:13:48.702 "subsystem": "sock", 00:13:48.702 "config": [ 00:13:48.702 { 00:13:48.702 "method": "sock_set_default_impl", 00:13:48.702 "params": { 00:13:48.702 "impl_name": "uring" 00:13:48.702 } 00:13:48.702 }, 00:13:48.702 { 00:13:48.702 "method": "sock_impl_set_options", 00:13:48.702 "params": { 00:13:48.702 "impl_name": "ssl", 00:13:48.702 "recv_buf_size": 4096, 00:13:48.702 "send_buf_size": 4096, 00:13:48.702 "enable_recv_pipe": true, 00:13:48.702 "enable_quickack": false, 00:13:48.702 "enable_placement_id": 0, 00:13:48.702 "enable_zerocopy_send_server": true, 00:13:48.702 "enable_zerocopy_send_client": false, 00:13:48.702 "zerocopy_threshold": 0, 00:13:48.702 "tls_version": 0, 00:13:48.702 "enable_ktls": false 00:13:48.702 } 00:13:48.702 }, 00:13:48.702 { 00:13:48.702 "method": "sock_impl_set_options", 00:13:48.702 "params": { 00:13:48.702 "impl_name": "posix", 00:13:48.702 "recv_buf_size": 2097152, 00:13:48.702 "send_buf_size": 2097152, 00:13:48.702 "enable_recv_pipe": true, 00:13:48.702 "enable_quickack": false, 00:13:48.702 "enable_placement_id": 0, 00:13:48.702 "enable_zerocopy_send_server": true, 00:13:48.702 "enable_zerocopy_send_client": false, 00:13:48.702 "zerocopy_threshold": 0, 00:13:48.702 "tls_version": 0, 00:13:48.702 "enable_ktls": false 00:13:48.702 } 00:13:48.702 }, 00:13:48.702 { 00:13:48.702 "method": "sock_impl_set_options", 00:13:48.702 "params": { 00:13:48.702 "impl_name": "uring", 00:13:48.702 "recv_buf_size": 2097152, 00:13:48.702 "send_buf_size": 2097152, 00:13:48.702 "enable_recv_pipe": true, 00:13:48.702 "enable_quickack": false, 00:13:48.702 "enable_placement_id": 0, 00:13:48.702 "enable_zerocopy_send_server": false, 00:13:48.702 "enable_zerocopy_send_client": false, 00:13:48.702 "zerocopy_threshold": 0, 00:13:48.702 "tls_version": 0, 00:13:48.702 "enable_ktls": false 00:13:48.702 } 00:13:48.702 } 00:13:48.702 ] 00:13:48.702 }, 00:13:48.702 { 00:13:48.702 "subsystem": "vmd", 00:13:48.702 "config": [] 00:13:48.702 }, 00:13:48.702 { 00:13:48.702 "subsystem": "accel", 00:13:48.702 "config": [ 00:13:48.702 { 00:13:48.702 "method": "accel_set_options", 00:13:48.702 "params": { 00:13:48.702 "small_cache_size": 128, 00:13:48.702 "large_cache_size": 16, 00:13:48.702 "task_count": 2048, 00:13:48.702 "sequence_count": 2048, 00:13:48.702 "buf_count": 2048 00:13:48.702 } 00:13:48.702 } 00:13:48.702 ] 00:13:48.702 }, 00:13:48.702 { 00:13:48.702 "subsystem": "bdev", 00:13:48.702 "config": [ 00:13:48.702 { 00:13:48.702 "method": "bdev_set_options", 00:13:48.702 "params": { 00:13:48.702 "bdev_io_pool_size": 65535, 00:13:48.702 "bdev_io_cache_size": 256, 00:13:48.702 "bdev_auto_examine": true, 00:13:48.702 "iobuf_small_cache_size": 128, 00:13:48.702 "iobuf_large_cache_size": 16 00:13:48.702 } 00:13:48.702 }, 00:13:48.702 { 00:13:48.702 "method": "bdev_raid_set_options", 00:13:48.702 "params": { 00:13:48.702 "process_window_size_kb": 1024, 00:13:48.702 "process_max_bandwidth_mb_sec": 0 00:13:48.702 } 00:13:48.702 }, 00:13:48.702 { 00:13:48.702 "method": "bdev_iscsi_set_options", 00:13:48.702 "params": { 00:13:48.702 "timeout_sec": 30 00:13:48.702 } 00:13:48.702 }, 00:13:48.702 { 00:13:48.702 "method": "bdev_nvme_set_options", 00:13:48.702 "params": { 00:13:48.702 "action_on_timeout": "none", 00:13:48.702 "timeout_us": 0, 00:13:48.702 "timeout_admin_us": 0, 00:13:48.702 "keep_alive_timeout_ms": 10000, 
00:13:48.702 "arbitration_burst": 0, 00:13:48.702 "low_priority_weight": 0, 00:13:48.702 "medium_priority_weight": 0, 00:13:48.702 "high_priority_weight": 0, 00:13:48.702 "nvme_adminq_poll_period_us": 10000, 00:13:48.702 "nvme_ioq_poll_period_us": 0, 00:13:48.702 "io_queue_requests": 0, 00:13:48.702 "delay_cmd_submit": true, 00:13:48.702 "transport_retry_count": 4, 00:13:48.702 "bdev_retry_count": 3, 00:13:48.702 "transport_ack_timeout": 0, 00:13:48.702 "ctrlr_loss_timeout_sec": 0, 00:13:48.702 "reconnect_delay_sec": 0, 00:13:48.702 "fast_io_fail_timeout_sec": 0, 00:13:48.702 "disable_auto_failback": false, 00:13:48.702 "generate_uuids": false, 00:13:48.702 "transport_tos": 0, 00:13:48.702 "nvme_error_stat": false, 00:13:48.702 "rdma_srq_size": 0, 00:13:48.702 "io_path_stat": false, 00:13:48.702 "allow_accel_sequence": false, 00:13:48.702 "rdma_max_cq_size": 0, 00:13:48.702 "rdma_cm_event_timeout_ms": 0, 00:13:48.702 "dhchap_digests": [ 00:13:48.702 "sha256", 00:13:48.702 "sha384", 00:13:48.702 "sha512" 00:13:48.702 ], 00:13:48.702 "dhchap_dhgroups": [ 00:13:48.702 "null", 00:13:48.702 "ffdhe2048", 00:13:48.702 "ffdhe3072", 00:13:48.702 "ffdhe4096", 00:13:48.702 "ffdhe6144", 00:13:48.702 "ffdhe8192" 00:13:48.702 ] 00:13:48.702 } 00:13:48.702 }, 00:13:48.702 { 00:13:48.702 "method": "bdev_nvme_set_hotplug", 00:13:48.702 "params": { 00:13:48.702 "period_us": 100000, 00:13:48.702 "enable": false 00:13:48.703 } 00:13:48.703 }, 00:13:48.703 { 00:13:48.703 "method": "bdev_malloc_create", 00:13:48.703 "params": { 00:13:48.703 "name": "malloc0", 00:13:48.703 "num_blocks": 8192, 00:13:48.703 "block_size": 4096, 00:13:48.703 "physical_block_size": 4096, 00:13:48.703 "uuid": "6cdd9548-5b83-4767-a9b2-e7f8dcf0f53f", 00:13:48.703 "optimal_io_boundary": 0, 00:13:48.703 "md_size": 0, 00:13:48.703 "dif_type": 0, 00:13:48.703 "dif_is_head_of_md": false, 00:13:48.703 "dif_pi_format": 0 00:13:48.703 } 00:13:48.703 }, 00:13:48.703 { 00:13:48.703 "method": "bdev_wait_for_examine" 00:13:48.703 } 00:13:48.703 ] 00:13:48.703 }, 00:13:48.703 { 00:13:48.703 "subsystem": "nbd", 00:13:48.703 "config": [] 00:13:48.703 }, 00:13:48.703 { 00:13:48.703 "subsystem": "scheduler", 00:13:48.703 "config": [ 00:13:48.703 { 00:13:48.703 "method": "framework_set_scheduler", 00:13:48.703 "params": { 00:13:48.703 "name": "static" 00:13:48.703 } 00:13:48.703 } 00:13:48.703 ] 00:13:48.703 }, 00:13:48.703 { 00:13:48.703 "subsystem": "nvmf", 00:13:48.703 "config": [ 00:13:48.703 { 00:13:48.703 "method": "nvmf_set_config", 00:13:48.703 "params": { 00:13:48.703 "discovery_filter": "match_any", 00:13:48.703 "admin_cmd_passthru": { 00:13:48.703 "identify_ctrlr": false 00:13:48.703 } 00:13:48.703 } 00:13:48.703 }, 00:13:48.703 { 00:13:48.703 "method": "nvmf_set_max_subsystems", 00:13:48.703 "params": { 00:13:48.703 "max_subsystems": 1024 00:13:48.703 } 00:13:48.703 }, 00:13:48.703 { 00:13:48.703 "method": "nvmf_set_crdt", 00:13:48.703 "params": { 00:13:48.703 "crdt1": 0, 00:13:48.703 "crdt2": 0, 00:13:48.703 "crdt3": 0 00:13:48.703 } 00:13:48.703 }, 00:13:48.703 { 00:13:48.703 "method": "nvmf_create_transport", 00:13:48.703 "params": { 00:13:48.703 "trtype": "TCP", 00:13:48.703 "max_queue_depth": 128, 00:13:48.703 "max_io_qpairs_per_ctrlr": 127, 00:13:48.703 "in_capsule_data_size": 4096, 00:13:48.703 "max_io_size": 131072, 00:13:48.703 "io_unit_size": 131072, 00:13:48.703 "max_aq_depth": 128, 00:13:48.703 "num_shared_buffers": 511, 00:13:48.703 "buf_cache_size": 4294967295, 00:13:48.703 "dif_insert_or_strip": false, 00:13:48.703 "zcopy": 
false, 00:13:48.703 "c2h_success": false, 00:13:48.703 "sock_priority": 0, 00:13:48.703 "abort_timeout_sec": 1, 00:13:48.703 "ack_timeout": 0, 00:13:48.703 "data_wr_pool_size": 0 00:13:48.703 } 00:13:48.703 }, 00:13:48.703 { 00:13:48.703 "method": "nvmf_create_subsystem", 00:13:48.703 "params": { 00:13:48.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:48.703 "allow_any_host": false, 00:13:48.703 "serial_number": "SPDK00000000000001", 00:13:48.703 "model_number": "SPDK bdev Controller", 00:13:48.703 "max_namespaces": 10, 00:13:48.703 "min_cntlid": 1, 00:13:48.703 "max_cntlid": 65519, 00:13:48.703 "ana_reporting": false 00:13:48.703 } 00:13:48.703 }, 00:13:48.703 { 00:13:48.703 "method": "nvmf_subsystem_add_host", 00:13:48.703 "params": { 00:13:48.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:48.703 "host": "nqn.2016-06.io.spdk:host1", 00:13:48.703 "psk": "/tmp/tmp.leHafEpJhQ" 00:13:48.703 } 00:13:48.703 }, 00:13:48.703 { 00:13:48.703 "method": "nvmf_subsystem_add_ns", 00:13:48.703 "params": { 00:13:48.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:48.703 "namespace": { 00:13:48.703 "nsid": 1, 00:13:48.703 "bdev_name": "malloc0", 00:13:48.703 "nguid": "6CDD95485B834767A9B2E7F8DCF0F53F", 00:13:48.703 "uuid": "6cdd9548-5b83-4767-a9b2-e7f8dcf0f53f", 00:13:48.703 "no_auto_visible": false 00:13:48.703 } 00:13:48.703 } 00:13:48.703 }, 00:13:48.703 { 00:13:48.703 "method": "nvmf_subsystem_add_listener", 00:13:48.703 "params": { 00:13:48.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:48.703 "listen_address": { 00:13:48.703 "trtype": "TCP", 00:13:48.703 "adrfam": "IPv4", 00:13:48.703 "traddr": "10.0.0.2", 00:13:48.703 "trsvcid": "4420" 00:13:48.703 }, 00:13:48.703 "secure_channel": true 00:13:48.703 } 00:13:48.703 } 00:13:48.703 ] 00:13:48.703 } 00:13:48.703 ] 00:13:48.703 }' 00:13:48.703 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:48.961 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:13:48.961 "subsystems": [ 00:13:48.961 { 00:13:48.961 "subsystem": "keyring", 00:13:48.961 "config": [] 00:13:48.961 }, 00:13:48.961 { 00:13:48.961 "subsystem": "iobuf", 00:13:48.961 "config": [ 00:13:48.961 { 00:13:48.961 "method": "iobuf_set_options", 00:13:48.961 "params": { 00:13:48.961 "small_pool_count": 8192, 00:13:48.961 "large_pool_count": 1024, 00:13:48.961 "small_bufsize": 8192, 00:13:48.961 "large_bufsize": 135168 00:13:48.961 } 00:13:48.961 } 00:13:48.961 ] 00:13:48.961 }, 00:13:48.961 { 00:13:48.961 "subsystem": "sock", 00:13:48.961 "config": [ 00:13:48.961 { 00:13:48.961 "method": "sock_set_default_impl", 00:13:48.961 "params": { 00:13:48.961 "impl_name": "uring" 00:13:48.961 } 00:13:48.961 }, 00:13:48.961 { 00:13:48.961 "method": "sock_impl_set_options", 00:13:48.961 "params": { 00:13:48.961 "impl_name": "ssl", 00:13:48.961 "recv_buf_size": 4096, 00:13:48.961 "send_buf_size": 4096, 00:13:48.961 "enable_recv_pipe": true, 00:13:48.961 "enable_quickack": false, 00:13:48.961 "enable_placement_id": 0, 00:13:48.961 "enable_zerocopy_send_server": true, 00:13:48.961 "enable_zerocopy_send_client": false, 00:13:48.961 "zerocopy_threshold": 0, 00:13:48.961 "tls_version": 0, 00:13:48.961 "enable_ktls": false 00:13:48.961 } 00:13:48.961 }, 00:13:48.961 { 00:13:48.961 "method": "sock_impl_set_options", 00:13:48.961 "params": { 00:13:48.961 "impl_name": "posix", 00:13:48.961 "recv_buf_size": 2097152, 00:13:48.961 "send_buf_size": 2097152, 00:13:48.961 
"enable_recv_pipe": true, 00:13:48.961 "enable_quickack": false, 00:13:48.961 "enable_placement_id": 0, 00:13:48.961 "enable_zerocopy_send_server": true, 00:13:48.961 "enable_zerocopy_send_client": false, 00:13:48.961 "zerocopy_threshold": 0, 00:13:48.961 "tls_version": 0, 00:13:48.961 "enable_ktls": false 00:13:48.961 } 00:13:48.961 }, 00:13:48.961 { 00:13:48.961 "method": "sock_impl_set_options", 00:13:48.961 "params": { 00:13:48.961 "impl_name": "uring", 00:13:48.961 "recv_buf_size": 2097152, 00:13:48.961 "send_buf_size": 2097152, 00:13:48.961 "enable_recv_pipe": true, 00:13:48.961 "enable_quickack": false, 00:13:48.961 "enable_placement_id": 0, 00:13:48.961 "enable_zerocopy_send_server": false, 00:13:48.961 "enable_zerocopy_send_client": false, 00:13:48.961 "zerocopy_threshold": 0, 00:13:48.961 "tls_version": 0, 00:13:48.961 "enable_ktls": false 00:13:48.961 } 00:13:48.961 } 00:13:48.961 ] 00:13:48.961 }, 00:13:48.961 { 00:13:48.961 "subsystem": "vmd", 00:13:48.961 "config": [] 00:13:48.961 }, 00:13:48.961 { 00:13:48.961 "subsystem": "accel", 00:13:48.961 "config": [ 00:13:48.961 { 00:13:48.961 "method": "accel_set_options", 00:13:48.961 "params": { 00:13:48.961 "small_cache_size": 128, 00:13:48.961 "large_cache_size": 16, 00:13:48.961 "task_count": 2048, 00:13:48.961 "sequence_count": 2048, 00:13:48.961 "buf_count": 2048 00:13:48.961 } 00:13:48.961 } 00:13:48.961 ] 00:13:48.961 }, 00:13:48.961 { 00:13:48.961 "subsystem": "bdev", 00:13:48.961 "config": [ 00:13:48.961 { 00:13:48.961 "method": "bdev_set_options", 00:13:48.961 "params": { 00:13:48.961 "bdev_io_pool_size": 65535, 00:13:48.961 "bdev_io_cache_size": 256, 00:13:48.961 "bdev_auto_examine": true, 00:13:48.961 "iobuf_small_cache_size": 128, 00:13:48.961 "iobuf_large_cache_size": 16 00:13:48.961 } 00:13:48.961 }, 00:13:48.961 { 00:13:48.961 "method": "bdev_raid_set_options", 00:13:48.961 "params": { 00:13:48.961 "process_window_size_kb": 1024, 00:13:48.961 "process_max_bandwidth_mb_sec": 0 00:13:48.961 } 00:13:48.961 }, 00:13:48.961 { 00:13:48.961 "method": "bdev_iscsi_set_options", 00:13:48.961 "params": { 00:13:48.961 "timeout_sec": 30 00:13:48.961 } 00:13:48.961 }, 00:13:48.961 { 00:13:48.961 "method": "bdev_nvme_set_options", 00:13:48.961 "params": { 00:13:48.961 "action_on_timeout": "none", 00:13:48.961 "timeout_us": 0, 00:13:48.961 "timeout_admin_us": 0, 00:13:48.961 "keep_alive_timeout_ms": 10000, 00:13:48.961 "arbitration_burst": 0, 00:13:48.961 "low_priority_weight": 0, 00:13:48.961 "medium_priority_weight": 0, 00:13:48.961 "high_priority_weight": 0, 00:13:48.961 "nvme_adminq_poll_period_us": 10000, 00:13:48.961 "nvme_ioq_poll_period_us": 0, 00:13:48.961 "io_queue_requests": 512, 00:13:48.961 "delay_cmd_submit": true, 00:13:48.961 "transport_retry_count": 4, 00:13:48.961 "bdev_retry_count": 3, 00:13:48.961 "transport_ack_timeout": 0, 00:13:48.961 "ctrlr_loss_timeout_sec": 0, 00:13:48.961 "reconnect_delay_sec": 0, 00:13:48.961 "fast_io_fail_timeout_sec": 0, 00:13:48.961 "disable_auto_failback": false, 00:13:48.961 "generate_uuids": false, 00:13:48.961 "transport_tos": 0, 00:13:48.961 "nvme_error_stat": false, 00:13:48.961 "rdma_srq_size": 0, 00:13:48.961 "io_path_stat": false, 00:13:48.961 "allow_accel_sequence": false, 00:13:48.961 "rdma_max_cq_size": 0, 00:13:48.961 "rdma_cm_event_timeout_ms": 0, 00:13:48.961 "dhchap_digests": [ 00:13:48.961 "sha256", 00:13:48.961 "sha384", 00:13:48.961 "sha512" 00:13:48.962 ], 00:13:48.962 "dhchap_dhgroups": [ 00:13:48.962 "null", 00:13:48.962 "ffdhe2048", 00:13:48.962 "ffdhe3072", 
00:13:48.962 "ffdhe4096", 00:13:48.962 "ffdhe6144", 00:13:48.962 "ffdhe8192" 00:13:48.962 ] 00:13:48.962 } 00:13:48.962 }, 00:13:48.962 { 00:13:48.962 "method": "bdev_nvme_attach_controller", 00:13:48.962 "params": { 00:13:48.962 "name": "TLSTEST", 00:13:48.962 "trtype": "TCP", 00:13:48.962 "adrfam": "IPv4", 00:13:48.962 "traddr": "10.0.0.2", 00:13:48.962 "trsvcid": "4420", 00:13:48.962 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:48.962 "prchk_reftag": false, 00:13:48.962 "prchk_guard": false, 00:13:48.962 "ctrlr_loss_timeout_sec": 0, 00:13:48.962 "reconnect_delay_sec": 0, 00:13:48.962 "fast_io_fail_timeout_sec": 0, 00:13:48.962 "psk": "/tmp/tmp.leHafEpJhQ", 00:13:48.962 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:48.962 "hdgst": false, 00:13:48.962 "ddgst": false 00:13:48.962 } 00:13:48.962 }, 00:13:48.962 { 00:13:48.962 "method": "bdev_nvme_set_hotplug", 00:13:48.962 "params": { 00:13:48.962 "period_us": 100000, 00:13:48.962 "enable": false 00:13:48.962 } 00:13:48.962 }, 00:13:48.962 { 00:13:48.962 "method": "bdev_wait_for_examine" 00:13:48.962 } 00:13:48.962 ] 00:13:48.962 }, 00:13:48.962 { 00:13:48.962 "subsystem": "nbd", 00:13:48.962 "config": [] 00:13:48.962 } 00:13:48.962 ] 00:13:48.962 }' 00:13:48.962 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 73068 00:13:48.962 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73068 ']' 00:13:48.962 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73068 00:13:48.962 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:48.962 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:48.962 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73068 00:13:48.962 killing process with pid 73068 00:13:48.962 Received shutdown signal, test time was about 10.000000 seconds 00:13:48.962 00:13:48.962 Latency(us) 00:13:48.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.962 =================================================================================================================== 00:13:48.962 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:48.962 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:48.962 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:48.962 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73068' 00:13:48.962 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73068 00:13:48.962 [2024-07-26 07:39:14.532323] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:48.962 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73068 00:13:49.219 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 73013 00:13:49.219 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73013 ']' 00:13:49.219 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73013 00:13:49.219 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:49.477 
07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:49.477 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73013 00:13:49.477 killing process with pid 73013 00:13:49.477 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:49.477 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:49.477 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73013' 00:13:49.477 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73013 00:13:49.477 [2024-07-26 07:39:14.848016] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:49.477 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73013 00:13:49.735 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:49.735 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:49.735 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:49.735 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:49.735 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:13:49.735 "subsystems": [ 00:13:49.735 { 00:13:49.735 "subsystem": "keyring", 00:13:49.735 "config": [] 00:13:49.735 }, 00:13:49.735 { 00:13:49.735 "subsystem": "iobuf", 00:13:49.735 "config": [ 00:13:49.735 { 00:13:49.735 "method": "iobuf_set_options", 00:13:49.735 "params": { 00:13:49.735 "small_pool_count": 8192, 00:13:49.735 "large_pool_count": 1024, 00:13:49.735 "small_bufsize": 8192, 00:13:49.735 "large_bufsize": 135168 00:13:49.735 } 00:13:49.735 } 00:13:49.735 ] 00:13:49.735 }, 00:13:49.735 { 00:13:49.735 "subsystem": "sock", 00:13:49.735 "config": [ 00:13:49.735 { 00:13:49.735 "method": "sock_set_default_impl", 00:13:49.735 "params": { 00:13:49.735 "impl_name": "uring" 00:13:49.735 } 00:13:49.735 }, 00:13:49.735 { 00:13:49.735 "method": "sock_impl_set_options", 00:13:49.735 "params": { 00:13:49.735 "impl_name": "ssl", 00:13:49.735 "recv_buf_size": 4096, 00:13:49.735 "send_buf_size": 4096, 00:13:49.735 "enable_recv_pipe": true, 00:13:49.735 "enable_quickack": false, 00:13:49.735 "enable_placement_id": 0, 00:13:49.735 "enable_zerocopy_send_server": true, 00:13:49.735 "enable_zerocopy_send_client": false, 00:13:49.735 "zerocopy_threshold": 0, 00:13:49.735 "tls_version": 0, 00:13:49.735 "enable_ktls": false 00:13:49.735 } 00:13:49.735 }, 00:13:49.735 { 00:13:49.735 "method": "sock_impl_set_options", 00:13:49.735 "params": { 00:13:49.735 "impl_name": "posix", 00:13:49.735 "recv_buf_size": 2097152, 00:13:49.735 "send_buf_size": 2097152, 00:13:49.735 "enable_recv_pipe": true, 00:13:49.735 "enable_quickack": false, 00:13:49.735 "enable_placement_id": 0, 00:13:49.735 "enable_zerocopy_send_server": true, 00:13:49.735 "enable_zerocopy_send_client": false, 00:13:49.735 "zerocopy_threshold": 0, 00:13:49.735 "tls_version": 0, 00:13:49.735 "enable_ktls": false 00:13:49.735 } 00:13:49.735 }, 00:13:49.735 { 00:13:49.735 "method": "sock_impl_set_options", 00:13:49.735 "params": { 00:13:49.735 "impl_name": "uring", 00:13:49.735 "recv_buf_size": 2097152, 
00:13:49.735 "send_buf_size": 2097152, 00:13:49.735 "enable_recv_pipe": true, 00:13:49.735 "enable_quickack": false, 00:13:49.735 "enable_placement_id": 0, 00:13:49.735 "enable_zerocopy_send_server": false, 00:13:49.735 "enable_zerocopy_send_client": false, 00:13:49.735 "zerocopy_threshold": 0, 00:13:49.735 "tls_version": 0, 00:13:49.735 "enable_ktls": false 00:13:49.735 } 00:13:49.735 } 00:13:49.735 ] 00:13:49.735 }, 00:13:49.735 { 00:13:49.735 "subsystem": "vmd", 00:13:49.735 "config": [] 00:13:49.735 }, 00:13:49.735 { 00:13:49.735 "subsystem": "accel", 00:13:49.735 "config": [ 00:13:49.735 { 00:13:49.735 "method": "accel_set_options", 00:13:49.735 "params": { 00:13:49.735 "small_cache_size": 128, 00:13:49.735 "large_cache_size": 16, 00:13:49.735 "task_count": 2048, 00:13:49.736 "sequence_count": 2048, 00:13:49.736 "buf_count": 2048 00:13:49.736 } 00:13:49.736 } 00:13:49.736 ] 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "subsystem": "bdev", 00:13:49.736 "config": [ 00:13:49.736 { 00:13:49.736 "method": "bdev_set_options", 00:13:49.736 "params": { 00:13:49.736 "bdev_io_pool_size": 65535, 00:13:49.736 "bdev_io_cache_size": 256, 00:13:49.736 "bdev_auto_examine": true, 00:13:49.736 "iobuf_small_cache_size": 128, 00:13:49.736 "iobuf_large_cache_size": 16 00:13:49.736 } 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "method": "bdev_raid_set_options", 00:13:49.736 "params": { 00:13:49.736 "process_window_size_kb": 1024, 00:13:49.736 "process_max_bandwidth_mb_sec": 0 00:13:49.736 } 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "method": "bdev_iscsi_set_options", 00:13:49.736 "params": { 00:13:49.736 "timeout_sec": 30 00:13:49.736 } 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "method": "bdev_nvme_set_options", 00:13:49.736 "params": { 00:13:49.736 "action_on_timeout": "none", 00:13:49.736 "timeout_us": 0, 00:13:49.736 "timeout_admin_us": 0, 00:13:49.736 "keep_alive_timeout_ms": 10000, 00:13:49.736 "arbitration_burst": 0, 00:13:49.736 "low_priority_weight": 0, 00:13:49.736 "medium_priority_weight": 0, 00:13:49.736 "high_priority_weight": 0, 00:13:49.736 "nvme_adminq_poll_period_us": 10000, 00:13:49.736 "nvme_ioq_poll_period_us": 0, 00:13:49.736 "io_queue_requests": 0, 00:13:49.736 "delay_cmd_submit": true, 00:13:49.736 "transport_retry_count": 4, 00:13:49.736 "bdev_retry_count": 3, 00:13:49.736 "transport_ack_timeout": 0, 00:13:49.736 "ctrlr_loss_timeout_sec": 0, 00:13:49.736 "reconnect_delay_sec": 0, 00:13:49.736 "fast_io_fail_timeout_sec": 0, 00:13:49.736 "disable_auto_failback": false, 00:13:49.736 "generate_uuids": false, 00:13:49.736 "transport_tos": 0, 00:13:49.736 "nvme_error_stat": false, 00:13:49.736 "rdma_srq_size": 0, 00:13:49.736 "io_path_stat": false, 00:13:49.736 "allow_accel_sequence": false, 00:13:49.736 "rdma_max_cq_size": 0, 00:13:49.736 "rdma_cm_event_timeout_ms": 0, 00:13:49.736 "dhchap_digests": [ 00:13:49.736 "sha256", 00:13:49.736 "sha384", 00:13:49.736 "sha512" 00:13:49.736 ], 00:13:49.736 "dhchap_dhgroups": [ 00:13:49.736 "null", 00:13:49.736 "ffdhe2048", 00:13:49.736 "ffdhe3072", 00:13:49.736 "ffdhe4096", 00:13:49.736 "ffdhe6144", 00:13:49.736 "ffdhe8192" 00:13:49.736 ] 00:13:49.736 } 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "method": "bdev_nvme_set_hotplug", 00:13:49.736 "params": { 00:13:49.736 "period_us": 100000, 00:13:49.736 "enable": false 00:13:49.736 } 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "method": "bdev_malloc_create", 00:13:49.736 "params": { 00:13:49.736 "name": "malloc0", 00:13:49.736 "num_blocks": 8192, 00:13:49.736 "block_size": 4096, 00:13:49.736 
"physical_block_size": 4096, 00:13:49.736 "uuid": "6cdd9548-5b83-4767-a9b2-e7f8dcf0f53f", 00:13:49.736 "optimal_io_boundary": 0, 00:13:49.736 "md_size": 0, 00:13:49.736 "dif_type": 0, 00:13:49.736 "dif_is_head_of_md": false, 00:13:49.736 "dif_pi_format": 0 00:13:49.736 } 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "method": "bdev_wait_for_examine" 00:13:49.736 } 00:13:49.736 ] 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "subsystem": "nbd", 00:13:49.736 "config": [] 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "subsystem": "scheduler", 00:13:49.736 "config": [ 00:13:49.736 { 00:13:49.736 "method": "framework_set_scheduler", 00:13:49.736 "params": { 00:13:49.736 "name": "static" 00:13:49.736 } 00:13:49.736 } 00:13:49.736 ] 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "subsystem": "nvmf", 00:13:49.736 "config": [ 00:13:49.736 { 00:13:49.736 "method": "nvmf_set_config", 00:13:49.736 "params": { 00:13:49.736 "discovery_filter": "match_any", 00:13:49.736 "admin_cmd_passthru": { 00:13:49.736 "identify_ctrlr": false 00:13:49.736 } 00:13:49.736 } 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "method": "nvmf_set_max_subsystems", 00:13:49.736 "params": { 00:13:49.736 "max_subsystems": 1024 00:13:49.736 } 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "method": "nvmf_set_crdt", 00:13:49.736 "params": { 00:13:49.736 "crdt1": 0, 00:13:49.736 "crdt2": 0, 00:13:49.736 "crdt3": 0 00:13:49.736 } 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "method": "nvmf_create_transport", 00:13:49.736 "params": { 00:13:49.736 "trtype": "TCP", 00:13:49.736 "max_queue_depth": 128, 00:13:49.736 "max_io_qpairs_per_ctrlr": 127, 00:13:49.736 "in_capsule_data_size": 4096, 00:13:49.736 "max_io_size": 131072, 00:13:49.736 "io_unit_size": 131072, 00:13:49.736 "max_aq_depth": 128, 00:13:49.736 "num_shared_buffers": 511, 00:13:49.736 "buf_cache_size": 4294967295, 00:13:49.736 "dif_insert_or_strip": false, 00:13:49.736 "zcopy": false, 00:13:49.736 "c2h_success": false, 00:13:49.736 "sock_priority": 0, 00:13:49.736 "abort_timeout_sec": 1, 00:13:49.736 "ack_timeout": 0, 00:13:49.736 "data_wr_pool_size": 0 00:13:49.736 } 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "method": "nvmf_create_subsystem", 00:13:49.736 "params": { 00:13:49.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:49.736 "allow_any_host": false, 00:13:49.736 "serial_number": "SPDK00000000000001", 00:13:49.736 "model_number": "SPDK bdev Controller", 00:13:49.736 "max_namespaces": 10, 00:13:49.736 "min_cntlid": 1, 00:13:49.736 "max_cntlid": 65519, 00:13:49.736 "ana_reporting": false 00:13:49.736 } 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "method": "nvmf_subsystem_add_host", 00:13:49.736 "params": { 00:13:49.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:49.736 "host": "nqn.2016-06.io.spdk:host1", 00:13:49.736 "psk": "/tmp/tmp.leHafEpJhQ" 00:13:49.736 } 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "method": "nvmf_subsystem_add_ns", 00:13:49.736 "params": { 00:13:49.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:49.736 "namespace": { 00:13:49.736 "nsid": 1, 00:13:49.736 "bdev_name": "malloc0", 00:13:49.736 "nguid": "6CDD95485B834767A9B2E7F8DCF0F53F", 00:13:49.736 "uuid": "6cdd9548-5b83-4767-a9b2-e7f8dcf0f53f", 00:13:49.736 "no_auto_visible": false 00:13:49.736 } 00:13:49.736 } 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "method": "nvmf_subsystem_add_listener", 00:13:49.736 "params": { 00:13:49.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:49.736 "listen_address": { 00:13:49.736 "trtype": "TCP", 00:13:49.736 "adrfam": "IPv4", 00:13:49.736 "traddr": "10.0.0.2", 00:13:49.736 "trsvcid": 
"4420" 00:13:49.736 }, 00:13:49.736 "secure_channel": true 00:13:49.736 } 00:13:49.736 } 00:13:49.736 ] 00:13:49.736 } 00:13:49.736 ] 00:13:49.736 }' 00:13:49.736 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73111 00:13:49.736 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:49.736 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73111 00:13:49.737 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73111 ']' 00:13:49.737 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.737 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:49.737 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.737 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:49.737 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:49.737 [2024-07-26 07:39:15.220782] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:13:49.737 [2024-07-26 07:39:15.220858] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.994 [2024-07-26 07:39:15.354594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.994 [2024-07-26 07:39:15.483741] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.994 [2024-07-26 07:39:15.483970] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.994 [2024-07-26 07:39:15.484184] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.994 [2024-07-26 07:39:15.484290] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.994 [2024-07-26 07:39:15.484301] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:49.994 [2024-07-26 07:39:15.484397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.252 [2024-07-26 07:39:15.672963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:50.252 [2024-07-26 07:39:15.752892] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.252 [2024-07-26 07:39:15.768795] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:50.252 [2024-07-26 07:39:15.784820] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:50.252 [2024-07-26 07:39:15.793662] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.510 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:50.510 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:50.510 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:50.510 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:50.510 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.768 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.768 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73143 00:13:50.768 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73143 /var/tmp/bdevperf.sock 00:13:50.768 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73143 ']' 00:13:50.768 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:50.768 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:50.768 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:13:50.768 "subsystems": [ 00:13:50.768 { 00:13:50.768 "subsystem": "keyring", 00:13:50.768 "config": [] 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "subsystem": "iobuf", 00:13:50.768 "config": [ 00:13:50.768 { 00:13:50.768 "method": "iobuf_set_options", 00:13:50.768 "params": { 00:13:50.768 "small_pool_count": 8192, 00:13:50.768 "large_pool_count": 1024, 00:13:50.769 "small_bufsize": 8192, 00:13:50.769 "large_bufsize": 135168 00:13:50.769 } 00:13:50.769 } 00:13:50.769 ] 00:13:50.769 }, 00:13:50.769 { 00:13:50.769 "subsystem": "sock", 00:13:50.769 "config": [ 00:13:50.769 { 00:13:50.769 "method": "sock_set_default_impl", 00:13:50.769 "params": { 00:13:50.769 "impl_name": "uring" 00:13:50.769 } 00:13:50.769 }, 00:13:50.769 { 00:13:50.769 "method": "sock_impl_set_options", 00:13:50.769 "params": { 00:13:50.769 "impl_name": "ssl", 00:13:50.769 "recv_buf_size": 4096, 00:13:50.769 "send_buf_size": 4096, 00:13:50.769 "enable_recv_pipe": true, 00:13:50.769 "enable_quickack": false, 00:13:50.769 "enable_placement_id": 0, 00:13:50.769 "enable_zerocopy_send_server": true, 00:13:50.769 "enable_zerocopy_send_client": false, 00:13:50.769 "zerocopy_threshold": 0, 00:13:50.769 "tls_version": 0, 00:13:50.769 "enable_ktls": false 00:13:50.769 } 00:13:50.769 }, 00:13:50.769 { 00:13:50.769 
"method": "sock_impl_set_options", 00:13:50.769 "params": { 00:13:50.769 "impl_name": "posix", 00:13:50.769 "recv_buf_size": 2097152, 00:13:50.769 "send_buf_size": 2097152, 00:13:50.769 "enable_recv_pipe": true, 00:13:50.769 "enable_quickack": false, 00:13:50.769 "enable_placement_id": 0, 00:13:50.769 "enable_zerocopy_send_server": true, 00:13:50.769 "enable_zerocopy_send_client": false, 00:13:50.769 "zerocopy_threshold": 0, 00:13:50.769 "tls_version": 0, 00:13:50.769 "enable_ktls": false 00:13:50.769 } 00:13:50.769 }, 00:13:50.769 { 00:13:50.769 "method": "sock_impl_set_options", 00:13:50.769 "params": { 00:13:50.769 "impl_name": "uring", 00:13:50.769 "recv_buf_size": 2097152, 00:13:50.769 "send_buf_size": 2097152, 00:13:50.769 "enable_recv_pipe": true, 00:13:50.769 "enable_quickack": false, 00:13:50.769 "enable_placement_id": 0, 00:13:50.769 "enable_zerocopy_send_server": false, 00:13:50.769 "enable_zerocopy_send_client": false, 00:13:50.769 "zerocopy_threshold": 0, 00:13:50.769 "tls_version": 0, 00:13:50.769 "enable_ktls": false 00:13:50.769 } 00:13:50.769 } 00:13:50.769 ] 00:13:50.769 }, 00:13:50.769 { 00:13:50.769 "subsystem": "vmd", 00:13:50.769 "config": [] 00:13:50.769 }, 00:13:50.769 { 00:13:50.769 "subsystem": "accel", 00:13:50.769 "config": [ 00:13:50.769 { 00:13:50.769 "method": "accel_set_options", 00:13:50.769 "params": { 00:13:50.769 "small_cache_size": 128, 00:13:50.769 "large_cache_size": 16, 00:13:50.769 "task_count": 2048, 00:13:50.769 "sequence_count": 2048, 00:13:50.769 "buf_count": 2048 00:13:50.769 } 00:13:50.769 } 00:13:50.769 ] 00:13:50.769 }, 00:13:50.769 { 00:13:50.769 "subsystem": "bdev", 00:13:50.769 "config": [ 00:13:50.769 { 00:13:50.769 "method": "bdev_set_options", 00:13:50.769 "params": { 00:13:50.769 "bdev_io_pool_size": 65535, 00:13:50.769 "bdev_io_cache_size": 256, 00:13:50.769 "bdev_auto_examine": true, 00:13:50.769 "iobuf_small_cache_size": 128, 00:13:50.769 "iobuf_large_cache_size": 16 00:13:50.769 } 00:13:50.769 }, 00:13:50.769 { 00:13:50.769 "method": "bdev_raid_set_options", 00:13:50.769 "params": { 00:13:50.769 "process_window_size_kb": 1024, 00:13:50.769 "process_max_bandwidth_mb_sec": 0 00:13:50.769 } 00:13:50.769 }, 00:13:50.769 { 00:13:50.769 "method": "bdev_iscsi_set_options", 00:13:50.769 "params": { 00:13:50.769 "timeout_sec": 30 00:13:50.769 } 00:13:50.769 }, 00:13:50.769 { 00:13:50.769 "method": "bdev_nvme_set_options", 00:13:50.769 "params": { 00:13:50.769 "action_on_timeout": "none", 00:13:50.769 "timeout_us": 0, 00:13:50.769 "timeout_admin_us": 0, 00:13:50.769 "keep_alive_timeout_ms": 10000, 00:13:50.769 "arbitration_burst": 0, 00:13:50.769 "low_priority_weight": 0, 00:13:50.769 "medium_priority_weight": 0, 00:13:50.769 "high_priority_weight": 0, 00:13:50.769 "nvme_adminq_poll_period_us": 10000, 00:13:50.769 "nvme_ioq_poll_period_us": 0, 00:13:50.769 "io_queue_requests": 512, 00:13:50.769 "delay_cmd_submit": true, 00:13:50.769 "transport_retry_count": 4, 00:13:50.769 "bdev_retry_count": 3, 00:13:50.769 "transport_ack_timeout": 0, 00:13:50.769 "ctrlr_loss_timeout_sec": 0, 00:13:50.769 "reconnect_delay_sec": 0, 00:13:50.769 "fast_io_fail_timeout_sec": 0, 00:13:50.769 "disable_aWaiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:50.769 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:50.769 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:50.769 uto_failback": false, 00:13:50.769 "generate_uuids": false, 00:13:50.769 "transport_tos": 0, 00:13:50.769 "nvme_error_stat": false, 00:13:50.769 "rdma_srq_size": 0, 00:13:50.769 "io_path_stat": false, 00:13:50.769 "allow_accel_sequence": false, 00:13:50.769 "rdma_max_cq_size": 0, 00:13:50.769 "rdma_cm_event_timeout_ms": 0, 00:13:50.769 "dhchap_digests": [ 00:13:50.769 "sha256", 00:13:50.769 "sha384", 00:13:50.769 "sha512" 00:13:50.769 ], 00:13:50.769 "dhchap_dhgroups": [ 00:13:50.769 "null", 00:13:50.769 "ffdhe2048", 00:13:50.769 "ffdhe3072", 00:13:50.769 "ffdhe4096", 00:13:50.769 "ffdhe6144", 00:13:50.769 "ffdhe8192" 00:13:50.769 ] 00:13:50.769 } 00:13:50.769 }, 00:13:50.769 { 00:13:50.769 "method": "bdev_nvme_attach_controller", 00:13:50.769 "params": { 00:13:50.769 "name": "TLSTEST", 00:13:50.769 "trtype": "TCP", 00:13:50.769 "adrfam": "IPv4", 00:13:50.769 "traddr": "10.0.0.2", 00:13:50.769 "trsvcid": "4420", 00:13:50.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:50.769 "prchk_reftag": false, 00:13:50.769 "prchk_guard": false, 00:13:50.769 "ctrlr_loss_timeout_sec": 0, 00:13:50.769 "reconnect_delay_sec": 0, 00:13:50.769 "fast_io_fail_timeout_sec": 0, 00:13:50.769 "psk": "/tmp/tmp.leHafEpJhQ", 00:13:50.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:50.769 "hdgst": false, 00:13:50.769 "ddgst": false 00:13:50.769 } 00:13:50.769 }, 00:13:50.769 { 00:13:50.769 "method": "bdev_nvme_set_hotplug", 00:13:50.769 "params": { 00:13:50.769 "period_us": 100000, 00:13:50.769 "enable": false 00:13:50.769 } 00:13:50.769 }, 00:13:50.769 { 00:13:50.769 "method": "bdev_wait_for_examine" 00:13:50.769 } 00:13:50.769 ] 00:13:50.769 }, 00:13:50.769 { 00:13:50.769 "subsystem": "nbd", 00:13:50.769 "config": [] 00:13:50.769 } 00:13:50.769 ] 00:13:50.769 }' 00:13:50.769 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:50.769 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.769 [2024-07-26 07:39:16.197240] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
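The JSON blob echoed to bdevperf above (over /dev/fd/63) pins the default socket implementation to uring and tunes the ssl/posix/uring implementations before any connection is opened. A rough runtime equivalent over the bdevperf RPC socket is sketched below; this is only an illustration, and the exact rpc.py option spelling should be confirmed with "rpc.py sock_set_default_impl -h" for the SPDK revision under test:

  # Sketch (assumed flags): select uring as the default socket implementation at runtime.
  # This has to happen before any transport or controller creates sockets, which is why
  # the test passes it declaratively in the startup config instead.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock sock_set_default_impl -i uring
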
00:13:50.769 [2024-07-26 07:39:16.197451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73143 ] 00:13:50.769 [2024-07-26 07:39:16.334118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.028 [2024-07-26 07:39:16.459308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.028 [2024-07-26 07:39:16.618949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:51.285 [2024-07-26 07:39:16.666996] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:51.285 [2024-07-26 07:39:16.667739] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:51.850 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.850 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:51.851 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:51.851 Running I/O for 10 seconds... 00:14:01.853 00:14:01.853 Latency(us) 00:14:01.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.853 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:01.853 Verification LBA range: start 0x0 length 0x2000 00:14:01.853 TLSTESTn1 : 10.02 4311.42 16.84 0.00 0.00 29623.63 7298.33 32887.16 00:14:01.853 =================================================================================================================== 00:14:01.853 Total : 4311.42 16.84 0.00 0.00 29623.63 7298.33 32887.16 00:14:01.853 0 00:14:01.853 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:01.853 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 73143 00:14:01.853 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73143 ']' 00:14:01.853 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73143 00:14:01.853 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:01.853 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:01.853 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73143 00:14:01.853 killing process with pid 73143 00:14:01.853 Received shutdown signal, test time was about 10.000000 seconds 00:14:01.853 00:14:01.853 Latency(us) 00:14:01.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.853 =================================================================================================================== 00:14:01.853 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:01.853 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:01.853 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:01.853 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 73143' 00:14:01.853 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73143 00:14:01.853 [2024-07-26 07:39:27.349197] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:01.853 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73143 00:14:02.112 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 73111 00:14:02.112 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73111 ']' 00:14:02.112 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73111 00:14:02.112 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:02.112 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:02.112 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73111 00:14:02.112 killing process with pid 73111 00:14:02.112 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:02.112 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:02.112 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73111' 00:14:02.112 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73111 00:14:02.112 [2024-07-26 07:39:27.668763] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:02.112 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73111 00:14:02.371 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:02.371 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.371 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:02.371 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.630 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73286 00:14:02.630 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:02.630 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73286 00:14:02.630 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73286 ']' 00:14:02.630 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.630 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:02.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.630 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
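The waitforlisten helper echoing above does nothing more than poll the application's RPC socket until it answers. A simplified stand-in (not the actual autotest_common.sh implementation; the retry budget, sleep interval and timeout here are illustrative) could look like:

  # Sketch: wait until an SPDK app answers on its UNIX-domain RPC socket.
  rpc_sock=/var/tmp/spdk.sock
  for i in $(seq 1 100); do
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
          break    # the app is up and serving RPCs
      fi
      sleep 0.5
  done
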
00:14:02.630 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:02.630 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.630 [2024-07-26 07:39:28.025028] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:14:02.630 [2024-07-26 07:39:28.025120] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.630 [2024-07-26 07:39:28.155210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.888 [2024-07-26 07:39:28.276113] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.888 [2024-07-26 07:39:28.276201] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.888 [2024-07-26 07:39:28.276237] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.888 [2024-07-26 07:39:28.276250] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.888 [2024-07-26 07:39:28.276262] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.888 [2024-07-26 07:39:28.276302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.888 [2024-07-26 07:39:28.356245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:03.455 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:03.455 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:03.455 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:03.455 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:03.455 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:03.713 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.713 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.leHafEpJhQ 00:14:03.713 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.leHafEpJhQ 00:14:03.713 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:03.713 [2024-07-26 07:39:29.305446] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.972 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:04.231 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:04.231 [2024-07-26 07:39:29.789553] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:04.231 [2024-07-26 07:39:29.789817] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.231 07:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:04.489 malloc0 00:14:04.489 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:04.748 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.leHafEpJhQ 00:14:05.006 [2024-07-26 07:39:30.492067] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:05.006 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=73336 00:14:05.006 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:05.006 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:05.006 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 73336 /var/tmp/bdevperf.sock 00:14:05.006 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73336 ']' 00:14:05.006 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:05.006 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:05.006 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:05.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:05.006 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:05.006 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.006 [2024-07-26 07:39:30.563946] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
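The target-side RPC sequence in the xtrace above builds the TLS-enabled subsystem end to end: create the TCP transport, create the subsystem, add a listener with -k (which triggers the "TLS support is considered experimental" notice), back it with a malloc bdev, and register the host together with its PSK file. Pulled together into one place (paths and the PSK temp file are the ones used in this run; the flag comments are this editor's reading of the log), the sequence is essentially:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key=/tmp/tmp.leHafEpJhQ                                   # PSK interchange file created earlier in the test
  $rpc nvmf_create_transport -t tcp -o                      # TCP transport; -o disables the C2H success optimization
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  $rpc bdev_malloc_create 32 4096 -b malloc0                # 32 MiB, 4096-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"   # deprecated PSK-path form
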
00:14:05.006 [2024-07-26 07:39:30.564226] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73336 ] 00:14:05.264 [2024-07-26 07:39:30.701870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.264 [2024-07-26 07:39:30.835521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.522 [2024-07-26 07:39:30.910902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:06.087 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:06.087 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:06.087 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.leHafEpJhQ 00:14:06.087 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:06.345 [2024-07-26 07:39:31.877577] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:06.603 nvme0n1 00:14:06.603 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:06.603 Running I/O for 1 seconds... 00:14:07.536 00:14:07.536 Latency(us) 00:14:07.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.536 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:07.536 Verification LBA range: start 0x0 length 0x2000 00:14:07.536 nvme0n1 : 1.03 4101.55 16.02 0.00 0.00 30877.33 6851.49 18707.55 00:14:07.536 =================================================================================================================== 00:14:07.536 Total : 4101.55 16.02 0.00 0.00 30877.33 6851.49 18707.55 00:14:07.536 0 00:14:07.536 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 73336 00:14:07.794 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73336 ']' 00:14:07.794 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73336 00:14:07.794 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:07.794 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:07.794 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73336 00:14:07.794 killing process with pid 73336 00:14:07.794 Received shutdown signal, test time was about 1.000000 seconds 00:14:07.794 00:14:07.794 Latency(us) 00:14:07.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.794 =================================================================================================================== 00:14:07.794 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:07.794 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:07.794 
07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:07.794 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73336' 00:14:07.794 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73336 00:14:07.794 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73336 00:14:08.052 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 73286 00:14:08.052 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73286 ']' 00:14:08.052 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73286 00:14:08.052 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:08.052 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.052 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73286 00:14:08.052 killing process with pid 73286 00:14:08.052 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:08.052 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:08.052 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73286' 00:14:08.052 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73286 00:14:08.052 [2024-07-26 07:39:33.484272] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:08.052 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73286 00:14:08.311 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:14:08.311 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.311 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:08.311 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.311 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73387 00:14:08.311 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:08.311 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73387 00:14:08.311 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73387 ']' 00:14:08.311 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.311 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:08.311 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
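On the initiator side, the bdevperf instance above (and the next one started below) uses the keyring flow rather than handing a raw PSK path to the controller: register the PSK file as key0, attach the controller with --psk key0, then drive I/O through bdevperf.py. As issued in the xtrace, with the RPC socket and key path from this run, the three steps are:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.leHafEpJhQ
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
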
00:14:08.311 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:08.311 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.311 [2024-07-26 07:39:33.861499] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:14:08.311 [2024-07-26 07:39:33.862407] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.569 [2024-07-26 07:39:34.002719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.569 [2024-07-26 07:39:34.116388] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.569 [2024-07-26 07:39:34.116791] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.569 [2024-07-26 07:39:34.116963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.569 [2024-07-26 07:39:34.117098] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.569 [2024-07-26 07:39:34.117132] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.569 [2024-07-26 07:39:34.117283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.827 [2024-07-26 07:39:34.191887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.393 [2024-07-26 07:39:34.858929] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.393 malloc0 00:14:09.393 [2024-07-26 07:39:34.894199] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:09.393 [2024-07-26 07:39:34.894437] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=73419 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 73419 
/var/tmp/bdevperf.sock 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73419 ']' 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:09.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:09.393 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.393 [2024-07-26 07:39:34.981940] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:14:09.393 [2024-07-26 07:39:34.982199] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73419 ] 00:14:09.651 [2024-07-26 07:39:35.122290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.651 [2024-07-26 07:39:35.240685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.908 [2024-07-26 07:39:35.316345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:10.473 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.474 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:10.474 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.leHafEpJhQ 00:14:10.732 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:10.732 [2024-07-26 07:39:36.298206] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:10.990 nvme0n1 00:14:10.990 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:10.990 Running I/O for 1 seconds... 
00:14:11.924 00:14:11.924 Latency(us) 00:14:11.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.924 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:11.924 Verification LBA range: start 0x0 length 0x2000 00:14:11.924 nvme0n1 : 1.02 4035.04 15.76 0.00 0.00 31305.60 2472.49 18945.86 00:14:11.924 =================================================================================================================== 00:14:11.924 Total : 4035.04 15.76 0.00 0.00 31305.60 2472.49 18945.86 00:14:12.182 0 00:14:12.182 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:14:12.182 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.182 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.182 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.182 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:14:12.182 "subsystems": [ 00:14:12.182 { 00:14:12.182 "subsystem": "keyring", 00:14:12.182 "config": [ 00:14:12.182 { 00:14:12.182 "method": "keyring_file_add_key", 00:14:12.182 "params": { 00:14:12.182 "name": "key0", 00:14:12.182 "path": "/tmp/tmp.leHafEpJhQ" 00:14:12.183 } 00:14:12.183 } 00:14:12.183 ] 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "subsystem": "iobuf", 00:14:12.183 "config": [ 00:14:12.183 { 00:14:12.183 "method": "iobuf_set_options", 00:14:12.183 "params": { 00:14:12.183 "small_pool_count": 8192, 00:14:12.183 "large_pool_count": 1024, 00:14:12.183 "small_bufsize": 8192, 00:14:12.183 "large_bufsize": 135168 00:14:12.183 } 00:14:12.183 } 00:14:12.183 ] 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "subsystem": "sock", 00:14:12.183 "config": [ 00:14:12.183 { 00:14:12.183 "method": "sock_set_default_impl", 00:14:12.183 "params": { 00:14:12.183 "impl_name": "uring" 00:14:12.183 } 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "method": "sock_impl_set_options", 00:14:12.183 "params": { 00:14:12.183 "impl_name": "ssl", 00:14:12.183 "recv_buf_size": 4096, 00:14:12.183 "send_buf_size": 4096, 00:14:12.183 "enable_recv_pipe": true, 00:14:12.183 "enable_quickack": false, 00:14:12.183 "enable_placement_id": 0, 00:14:12.183 "enable_zerocopy_send_server": true, 00:14:12.183 "enable_zerocopy_send_client": false, 00:14:12.183 "zerocopy_threshold": 0, 00:14:12.183 "tls_version": 0, 00:14:12.183 "enable_ktls": false 00:14:12.183 } 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "method": "sock_impl_set_options", 00:14:12.183 "params": { 00:14:12.183 "impl_name": "posix", 00:14:12.183 "recv_buf_size": 2097152, 00:14:12.183 "send_buf_size": 2097152, 00:14:12.183 "enable_recv_pipe": true, 00:14:12.183 "enable_quickack": false, 00:14:12.183 "enable_placement_id": 0, 00:14:12.183 "enable_zerocopy_send_server": true, 00:14:12.183 "enable_zerocopy_send_client": false, 00:14:12.183 "zerocopy_threshold": 0, 00:14:12.183 "tls_version": 0, 00:14:12.183 "enable_ktls": false 00:14:12.183 } 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "method": "sock_impl_set_options", 00:14:12.183 "params": { 00:14:12.183 "impl_name": "uring", 00:14:12.183 "recv_buf_size": 2097152, 00:14:12.183 "send_buf_size": 2097152, 00:14:12.183 "enable_recv_pipe": true, 00:14:12.183 "enable_quickack": false, 00:14:12.183 "enable_placement_id": 0, 00:14:12.183 "enable_zerocopy_send_server": false, 00:14:12.183 "enable_zerocopy_send_client": false, 00:14:12.183 
"zerocopy_threshold": 0, 00:14:12.183 "tls_version": 0, 00:14:12.183 "enable_ktls": false 00:14:12.183 } 00:14:12.183 } 00:14:12.183 ] 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "subsystem": "vmd", 00:14:12.183 "config": [] 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "subsystem": "accel", 00:14:12.183 "config": [ 00:14:12.183 { 00:14:12.183 "method": "accel_set_options", 00:14:12.183 "params": { 00:14:12.183 "small_cache_size": 128, 00:14:12.183 "large_cache_size": 16, 00:14:12.183 "task_count": 2048, 00:14:12.183 "sequence_count": 2048, 00:14:12.183 "buf_count": 2048 00:14:12.183 } 00:14:12.183 } 00:14:12.183 ] 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "subsystem": "bdev", 00:14:12.183 "config": [ 00:14:12.183 { 00:14:12.183 "method": "bdev_set_options", 00:14:12.183 "params": { 00:14:12.183 "bdev_io_pool_size": 65535, 00:14:12.183 "bdev_io_cache_size": 256, 00:14:12.183 "bdev_auto_examine": true, 00:14:12.183 "iobuf_small_cache_size": 128, 00:14:12.183 "iobuf_large_cache_size": 16 00:14:12.183 } 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "method": "bdev_raid_set_options", 00:14:12.183 "params": { 00:14:12.183 "process_window_size_kb": 1024, 00:14:12.183 "process_max_bandwidth_mb_sec": 0 00:14:12.183 } 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "method": "bdev_iscsi_set_options", 00:14:12.183 "params": { 00:14:12.183 "timeout_sec": 30 00:14:12.183 } 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "method": "bdev_nvme_set_options", 00:14:12.183 "params": { 00:14:12.183 "action_on_timeout": "none", 00:14:12.183 "timeout_us": 0, 00:14:12.183 "timeout_admin_us": 0, 00:14:12.183 "keep_alive_timeout_ms": 10000, 00:14:12.183 "arbitration_burst": 0, 00:14:12.183 "low_priority_weight": 0, 00:14:12.183 "medium_priority_weight": 0, 00:14:12.183 "high_priority_weight": 0, 00:14:12.183 "nvme_adminq_poll_period_us": 10000, 00:14:12.183 "nvme_ioq_poll_period_us": 0, 00:14:12.183 "io_queue_requests": 0, 00:14:12.183 "delay_cmd_submit": true, 00:14:12.183 "transport_retry_count": 4, 00:14:12.183 "bdev_retry_count": 3, 00:14:12.183 "transport_ack_timeout": 0, 00:14:12.183 "ctrlr_loss_timeout_sec": 0, 00:14:12.183 "reconnect_delay_sec": 0, 00:14:12.183 "fast_io_fail_timeout_sec": 0, 00:14:12.183 "disable_auto_failback": false, 00:14:12.183 "generate_uuids": false, 00:14:12.183 "transport_tos": 0, 00:14:12.183 "nvme_error_stat": false, 00:14:12.183 "rdma_srq_size": 0, 00:14:12.183 "io_path_stat": false, 00:14:12.183 "allow_accel_sequence": false, 00:14:12.183 "rdma_max_cq_size": 0, 00:14:12.183 "rdma_cm_event_timeout_ms": 0, 00:14:12.183 "dhchap_digests": [ 00:14:12.183 "sha256", 00:14:12.183 "sha384", 00:14:12.183 "sha512" 00:14:12.183 ], 00:14:12.183 "dhchap_dhgroups": [ 00:14:12.183 "null", 00:14:12.183 "ffdhe2048", 00:14:12.183 "ffdhe3072", 00:14:12.183 "ffdhe4096", 00:14:12.183 "ffdhe6144", 00:14:12.183 "ffdhe8192" 00:14:12.183 ] 00:14:12.183 } 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "method": "bdev_nvme_set_hotplug", 00:14:12.183 "params": { 00:14:12.183 "period_us": 100000, 00:14:12.183 "enable": false 00:14:12.183 } 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "method": "bdev_malloc_create", 00:14:12.183 "params": { 00:14:12.183 "name": "malloc0", 00:14:12.183 "num_blocks": 8192, 00:14:12.183 "block_size": 4096, 00:14:12.183 "physical_block_size": 4096, 00:14:12.183 "uuid": "dff8e1e2-35c8-4a6a-8f88-6fa028fb9b77", 00:14:12.183 "optimal_io_boundary": 0, 00:14:12.183 "md_size": 0, 00:14:12.183 "dif_type": 0, 00:14:12.183 "dif_is_head_of_md": false, 00:14:12.183 "dif_pi_format": 0 00:14:12.183 } 
00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "method": "bdev_wait_for_examine" 00:14:12.183 } 00:14:12.183 ] 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "subsystem": "nbd", 00:14:12.183 "config": [] 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "subsystem": "scheduler", 00:14:12.183 "config": [ 00:14:12.183 { 00:14:12.183 "method": "framework_set_scheduler", 00:14:12.183 "params": { 00:14:12.183 "name": "static" 00:14:12.183 } 00:14:12.183 } 00:14:12.183 ] 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "subsystem": "nvmf", 00:14:12.183 "config": [ 00:14:12.183 { 00:14:12.183 "method": "nvmf_set_config", 00:14:12.183 "params": { 00:14:12.183 "discovery_filter": "match_any", 00:14:12.183 "admin_cmd_passthru": { 00:14:12.183 "identify_ctrlr": false 00:14:12.183 } 00:14:12.183 } 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "method": "nvmf_set_max_subsystems", 00:14:12.183 "params": { 00:14:12.183 "max_subsystems": 1024 00:14:12.183 } 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "method": "nvmf_set_crdt", 00:14:12.183 "params": { 00:14:12.183 "crdt1": 0, 00:14:12.183 "crdt2": 0, 00:14:12.183 "crdt3": 0 00:14:12.183 } 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "method": "nvmf_create_transport", 00:14:12.183 "params": { 00:14:12.183 "trtype": "TCP", 00:14:12.183 "max_queue_depth": 128, 00:14:12.183 "max_io_qpairs_per_ctrlr": 127, 00:14:12.183 "in_capsule_data_size": 4096, 00:14:12.183 "max_io_size": 131072, 00:14:12.183 "io_unit_size": 131072, 00:14:12.183 "max_aq_depth": 128, 00:14:12.183 "num_shared_buffers": 511, 00:14:12.183 "buf_cache_size": 4294967295, 00:14:12.183 "dif_insert_or_strip": false, 00:14:12.183 "zcopy": false, 00:14:12.183 "c2h_success": false, 00:14:12.183 "sock_priority": 0, 00:14:12.183 "abort_timeout_sec": 1, 00:14:12.183 "ack_timeout": 0, 00:14:12.183 "data_wr_pool_size": 0 00:14:12.183 } 00:14:12.183 }, 00:14:12.183 { 00:14:12.183 "method": "nvmf_create_subsystem", 00:14:12.183 "params": { 00:14:12.183 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.183 "allow_any_host": false, 00:14:12.183 "serial_number": "00000000000000000000", 00:14:12.183 "model_number": "SPDK bdev Controller", 00:14:12.183 "max_namespaces": 32, 00:14:12.183 "min_cntlid": 1, 00:14:12.183 "max_cntlid": 65519, 00:14:12.183 "ana_reporting": false 00:14:12.183 } 00:14:12.184 }, 00:14:12.184 { 00:14:12.184 "method": "nvmf_subsystem_add_host", 00:14:12.184 "params": { 00:14:12.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.184 "host": "nqn.2016-06.io.spdk:host1", 00:14:12.184 "psk": "key0" 00:14:12.184 } 00:14:12.184 }, 00:14:12.184 { 00:14:12.184 "method": "nvmf_subsystem_add_ns", 00:14:12.184 "params": { 00:14:12.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.184 "namespace": { 00:14:12.184 "nsid": 1, 00:14:12.184 "bdev_name": "malloc0", 00:14:12.184 "nguid": "DFF8E1E235C84A6A8F886FA028FB9B77", 00:14:12.184 "uuid": "dff8e1e2-35c8-4a6a-8f88-6fa028fb9b77", 00:14:12.184 "no_auto_visible": false 00:14:12.184 } 00:14:12.184 } 00:14:12.184 }, 00:14:12.184 { 00:14:12.184 "method": "nvmf_subsystem_add_listener", 00:14:12.184 "params": { 00:14:12.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.184 "listen_address": { 00:14:12.184 "trtype": "TCP", 00:14:12.184 "adrfam": "IPv4", 00:14:12.184 "traddr": "10.0.0.2", 00:14:12.184 "trsvcid": "4420" 00:14:12.184 }, 00:14:12.184 "secure_channel": false, 00:14:12.184 "sock_impl": "ssl" 00:14:12.184 } 00:14:12.184 } 00:14:12.184 ] 00:14:12.184 } 00:14:12.184 ] 00:14:12.184 }' 00:14:12.184 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:12.443 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:14:12.443 "subsystems": [ 00:14:12.443 { 00:14:12.443 "subsystem": "keyring", 00:14:12.443 "config": [ 00:14:12.443 { 00:14:12.443 "method": "keyring_file_add_key", 00:14:12.443 "params": { 00:14:12.443 "name": "key0", 00:14:12.443 "path": "/tmp/tmp.leHafEpJhQ" 00:14:12.443 } 00:14:12.443 } 00:14:12.443 ] 00:14:12.443 }, 00:14:12.443 { 00:14:12.443 "subsystem": "iobuf", 00:14:12.443 "config": [ 00:14:12.443 { 00:14:12.443 "method": "iobuf_set_options", 00:14:12.443 "params": { 00:14:12.443 "small_pool_count": 8192, 00:14:12.443 "large_pool_count": 1024, 00:14:12.443 "small_bufsize": 8192, 00:14:12.443 "large_bufsize": 135168 00:14:12.443 } 00:14:12.443 } 00:14:12.443 ] 00:14:12.443 }, 00:14:12.443 { 00:14:12.443 "subsystem": "sock", 00:14:12.443 "config": [ 00:14:12.443 { 00:14:12.443 "method": "sock_set_default_impl", 00:14:12.443 "params": { 00:14:12.443 "impl_name": "uring" 00:14:12.443 } 00:14:12.443 }, 00:14:12.443 { 00:14:12.443 "method": "sock_impl_set_options", 00:14:12.443 "params": { 00:14:12.443 "impl_name": "ssl", 00:14:12.443 "recv_buf_size": 4096, 00:14:12.443 "send_buf_size": 4096, 00:14:12.443 "enable_recv_pipe": true, 00:14:12.443 "enable_quickack": false, 00:14:12.443 "enable_placement_id": 0, 00:14:12.443 "enable_zerocopy_send_server": true, 00:14:12.443 "enable_zerocopy_send_client": false, 00:14:12.443 "zerocopy_threshold": 0, 00:14:12.443 "tls_version": 0, 00:14:12.443 "enable_ktls": false 00:14:12.443 } 00:14:12.443 }, 00:14:12.443 { 00:14:12.443 "method": "sock_impl_set_options", 00:14:12.443 "params": { 00:14:12.443 "impl_name": "posix", 00:14:12.443 "recv_buf_size": 2097152, 00:14:12.443 "send_buf_size": 2097152, 00:14:12.443 "enable_recv_pipe": true, 00:14:12.443 "enable_quickack": false, 00:14:12.443 "enable_placement_id": 0, 00:14:12.443 "enable_zerocopy_send_server": true, 00:14:12.443 "enable_zerocopy_send_client": false, 00:14:12.443 "zerocopy_threshold": 0, 00:14:12.443 "tls_version": 0, 00:14:12.443 "enable_ktls": false 00:14:12.443 } 00:14:12.443 }, 00:14:12.443 { 00:14:12.443 "method": "sock_impl_set_options", 00:14:12.443 "params": { 00:14:12.443 "impl_name": "uring", 00:14:12.443 "recv_buf_size": 2097152, 00:14:12.443 "send_buf_size": 2097152, 00:14:12.443 "enable_recv_pipe": true, 00:14:12.443 "enable_quickack": false, 00:14:12.443 "enable_placement_id": 0, 00:14:12.443 "enable_zerocopy_send_server": false, 00:14:12.443 "enable_zerocopy_send_client": false, 00:14:12.443 "zerocopy_threshold": 0, 00:14:12.443 "tls_version": 0, 00:14:12.443 "enable_ktls": false 00:14:12.443 } 00:14:12.443 } 00:14:12.443 ] 00:14:12.443 }, 00:14:12.443 { 00:14:12.443 "subsystem": "vmd", 00:14:12.443 "config": [] 00:14:12.443 }, 00:14:12.443 { 00:14:12.443 "subsystem": "accel", 00:14:12.443 "config": [ 00:14:12.443 { 00:14:12.443 "method": "accel_set_options", 00:14:12.443 "params": { 00:14:12.443 "small_cache_size": 128, 00:14:12.443 "large_cache_size": 16, 00:14:12.443 "task_count": 2048, 00:14:12.443 "sequence_count": 2048, 00:14:12.443 "buf_count": 2048 00:14:12.443 } 00:14:12.443 } 00:14:12.443 ] 00:14:12.443 }, 00:14:12.443 { 00:14:12.443 "subsystem": "bdev", 00:14:12.443 "config": [ 00:14:12.443 { 00:14:12.443 "method": "bdev_set_options", 00:14:12.443 "params": { 00:14:12.443 "bdev_io_pool_size": 65535, 00:14:12.443 "bdev_io_cache_size": 256, 00:14:12.443 "bdev_auto_examine": true, 
00:14:12.443 "iobuf_small_cache_size": 128, 00:14:12.443 "iobuf_large_cache_size": 16 00:14:12.443 } 00:14:12.443 }, 00:14:12.443 { 00:14:12.443 "method": "bdev_raid_set_options", 00:14:12.443 "params": { 00:14:12.443 "process_window_size_kb": 1024, 00:14:12.443 "process_max_bandwidth_mb_sec": 0 00:14:12.443 } 00:14:12.443 }, 00:14:12.443 { 00:14:12.443 "method": "bdev_iscsi_set_options", 00:14:12.443 "params": { 00:14:12.443 "timeout_sec": 30 00:14:12.443 } 00:14:12.443 }, 00:14:12.443 { 00:14:12.443 "method": "bdev_nvme_set_options", 00:14:12.443 "params": { 00:14:12.443 "action_on_timeout": "none", 00:14:12.443 "timeout_us": 0, 00:14:12.443 "timeout_admin_us": 0, 00:14:12.443 "keep_alive_timeout_ms": 10000, 00:14:12.443 "arbitration_burst": 0, 00:14:12.443 "low_priority_weight": 0, 00:14:12.443 "medium_priority_weight": 0, 00:14:12.443 "high_priority_weight": 0, 00:14:12.443 "nvme_adminq_poll_period_us": 10000, 00:14:12.443 "nvme_ioq_poll_period_us": 0, 00:14:12.443 "io_queue_requests": 512, 00:14:12.443 "delay_cmd_submit": true, 00:14:12.443 "transport_retry_count": 4, 00:14:12.443 "bdev_retry_count": 3, 00:14:12.443 "transport_ack_timeout": 0, 00:14:12.443 "ctrlr_loss_timeout_sec": 0, 00:14:12.443 "reconnect_delay_sec": 0, 00:14:12.443 "fast_io_fail_timeout_sec": 0, 00:14:12.443 "disable_auto_failback": false, 00:14:12.443 "generate_uuids": false, 00:14:12.443 "transport_tos": 0, 00:14:12.443 "nvme_error_stat": false, 00:14:12.443 "rdma_srq_size": 0, 00:14:12.443 "io_path_stat": false, 00:14:12.443 "allow_accel_sequence": false, 00:14:12.443 "rdma_max_cq_size": 0, 00:14:12.443 "rdma_cm_event_timeout_ms": 0, 00:14:12.443 "dhchap_digests": [ 00:14:12.443 "sha256", 00:14:12.443 "sha384", 00:14:12.443 "sha512" 00:14:12.443 ], 00:14:12.443 "dhchap_dhgroups": [ 00:14:12.443 "null", 00:14:12.443 "ffdhe2048", 00:14:12.443 "ffdhe3072", 00:14:12.443 "ffdhe4096", 00:14:12.443 "ffdhe6144", 00:14:12.443 "ffdhe8192" 00:14:12.443 ] 00:14:12.443 } 00:14:12.443 }, 00:14:12.443 { 00:14:12.443 "method": "bdev_nvme_attach_controller", 00:14:12.443 "params": { 00:14:12.443 "name": "nvme0", 00:14:12.443 "trtype": "TCP", 00:14:12.443 "adrfam": "IPv4", 00:14:12.443 "traddr": "10.0.0.2", 00:14:12.443 "trsvcid": "4420", 00:14:12.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.443 "prchk_reftag": false, 00:14:12.443 "prchk_guard": false, 00:14:12.443 "ctrlr_loss_timeout_sec": 0, 00:14:12.443 "reconnect_delay_sec": 0, 00:14:12.443 "fast_io_fail_timeout_sec": 0, 00:14:12.443 "psk": "key0", 00:14:12.443 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:12.443 "hdgst": false, 00:14:12.443 "ddgst": false 00:14:12.443 } 00:14:12.443 }, 00:14:12.443 { 00:14:12.443 "method": "bdev_nvme_set_hotplug", 00:14:12.444 "params": { 00:14:12.444 "period_us": 100000, 00:14:12.444 "enable": false 00:14:12.444 } 00:14:12.444 }, 00:14:12.444 { 00:14:12.444 "method": "bdev_enable_histogram", 00:14:12.444 "params": { 00:14:12.444 "name": "nvme0n1", 00:14:12.444 "enable": true 00:14:12.444 } 00:14:12.444 }, 00:14:12.444 { 00:14:12.444 "method": "bdev_wait_for_examine" 00:14:12.444 } 00:14:12.444 ] 00:14:12.444 }, 00:14:12.444 { 00:14:12.444 "subsystem": "nbd", 00:14:12.444 "config": [] 00:14:12.444 } 00:14:12.444 ] 00:14:12.444 }' 00:14:12.444 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 73419 00:14:12.444 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73419 ']' 00:14:12.444 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- 
# kill -0 73419 00:14:12.444 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:12.444 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:12.444 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73419 00:14:12.444 killing process with pid 73419 00:14:12.444 Received shutdown signal, test time was about 1.000000 seconds 00:14:12.444 00:14:12.444 Latency(us) 00:14:12.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.444 =================================================================================================================== 00:14:12.444 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:12.444 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:12.444 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:12.444 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73419' 00:14:12.444 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73419 00:14:12.444 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73419 00:14:13.011 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 73387 00:14:13.011 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73387 ']' 00:14:13.011 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73387 00:14:13.011 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:13.011 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:13.011 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73387 00:14:13.011 killing process with pid 73387 00:14:13.011 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:13.011 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:13.011 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73387' 00:14:13.011 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73387 00:14:13.011 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73387 00:14:13.270 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:14:13.270 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:13.270 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:14:13.270 "subsystems": [ 00:14:13.270 { 00:14:13.270 "subsystem": "keyring", 00:14:13.270 "config": [ 00:14:13.270 { 00:14:13.270 "method": "keyring_file_add_key", 00:14:13.270 "params": { 00:14:13.270 "name": "key0", 00:14:13.270 "path": "/tmp/tmp.leHafEpJhQ" 00:14:13.270 } 00:14:13.270 } 00:14:13.270 ] 00:14:13.270 }, 00:14:13.270 { 00:14:13.270 "subsystem": "iobuf", 00:14:13.270 "config": [ 00:14:13.270 { 00:14:13.270 "method": "iobuf_set_options", 00:14:13.270 "params": { 00:14:13.270 "small_pool_count": 8192, 00:14:13.270 "large_pool_count": 
1024, 00:14:13.270 "small_bufsize": 8192, 00:14:13.270 "large_bufsize": 135168 00:14:13.270 } 00:14:13.270 } 00:14:13.270 ] 00:14:13.270 }, 00:14:13.270 { 00:14:13.270 "subsystem": "sock", 00:14:13.270 "config": [ 00:14:13.270 { 00:14:13.270 "method": "sock_set_default_impl", 00:14:13.270 "params": { 00:14:13.270 "impl_name": "uring" 00:14:13.270 } 00:14:13.270 }, 00:14:13.270 { 00:14:13.270 "method": "sock_impl_set_options", 00:14:13.270 "params": { 00:14:13.270 "impl_name": "ssl", 00:14:13.270 "recv_buf_size": 4096, 00:14:13.270 "send_buf_size": 4096, 00:14:13.270 "enable_recv_pipe": true, 00:14:13.270 "enable_quickack": false, 00:14:13.270 "enable_placement_id": 0, 00:14:13.271 "enable_zerocopy_send_server": true, 00:14:13.271 "enable_zerocopy_send_client": false, 00:14:13.271 "zerocopy_threshold": 0, 00:14:13.271 "tls_version": 0, 00:14:13.271 "enable_ktls": false 00:14:13.271 } 00:14:13.271 }, 00:14:13.271 { 00:14:13.271 "method": "sock_impl_set_options", 00:14:13.271 "params": { 00:14:13.271 "impl_name": "posix", 00:14:13.271 "recv_buf_size": 2097152, 00:14:13.271 "send_buf_size": 2097152, 00:14:13.271 "enable_recv_pipe": true, 00:14:13.271 "enable_quickack": false, 00:14:13.271 "enable_placement_id": 0, 00:14:13.271 "enable_zerocopy_send_server": true, 00:14:13.271 "enable_zerocopy_send_client": false, 00:14:13.271 "zerocopy_threshold": 0, 00:14:13.271 "tls_version": 0, 00:14:13.271 "enable_ktls": false 00:14:13.271 } 00:14:13.271 }, 00:14:13.271 { 00:14:13.271 "method": "sock_impl_set_options", 00:14:13.271 "params": { 00:14:13.271 "impl_name": "uring", 00:14:13.271 "recv_buf_size": 2097152, 00:14:13.271 "send_buf_size": 2097152, 00:14:13.271 "enable_recv_pipe": true, 00:14:13.271 "enable_quickack": false, 00:14:13.271 "enable_placement_id": 0, 00:14:13.271 "enable_zerocopy_send_server": false, 00:14:13.271 "enable_zerocopy_send_client": false, 00:14:13.271 "zerocopy_threshold": 0, 00:14:13.271 "tls_version": 0, 00:14:13.271 "enable_ktls": false 00:14:13.271 } 00:14:13.271 } 00:14:13.271 ] 00:14:13.271 }, 00:14:13.271 { 00:14:13.271 "subsystem": "vmd", 00:14:13.271 "config": [] 00:14:13.271 }, 00:14:13.271 { 00:14:13.271 "subsystem": "accel", 00:14:13.271 "config": [ 00:14:13.271 { 00:14:13.271 "method": "accel_set_options", 00:14:13.271 "params": { 00:14:13.271 "small_cache_size": 128, 00:14:13.271 "large_cache_size": 16, 00:14:13.271 "task_count": 2048, 00:14:13.271 "sequence_count": 2048, 00:14:13.271 "buf_count": 2048 00:14:13.271 } 00:14:13.271 } 00:14:13.271 ] 00:14:13.271 }, 00:14:13.271 { 00:14:13.271 "subsystem": "bdev", 00:14:13.271 "config": [ 00:14:13.271 { 00:14:13.271 "method": "bdev_set_options", 00:14:13.271 "params": { 00:14:13.271 "bdev_io_pool_size": 65535, 00:14:13.271 "bdev_io_cache_size": 256, 00:14:13.271 "bdev_auto_examine": true, 00:14:13.271 "iobuf_small_cache_size": 128, 00:14:13.271 "iobuf_large_cache_size": 16 00:14:13.271 } 00:14:13.271 }, 00:14:13.271 { 00:14:13.271 "method": "bdev_raid_set_options", 00:14:13.271 "params": { 00:14:13.271 "process_window_size_kb": 1024, 00:14:13.271 "process_max_bandwidth_mb_sec": 0 00:14:13.271 } 00:14:13.271 }, 00:14:13.271 { 00:14:13.271 "method": "bdev_iscsi_set_options", 00:14:13.271 "params": { 00:14:13.271 "timeout_sec": 30 00:14:13.271 } 00:14:13.271 }, 00:14:13.271 { 00:14:13.271 "method": "bdev_nvme_set_options", 00:14:13.271 "params": { 00:14:13.271 "action_on_timeout": "none", 00:14:13.271 "timeout_us": 0, 00:14:13.271 "timeout_admin_us": 0, 00:14:13.271 "keep_alive_timeout_ms": 10000, 00:14:13.271 
"arbitration_burst": 0, 00:14:13.271 "low_priority_weight": 0, 00:14:13.271 "medium_priority_weight": 0, 00:14:13.271 "high_priority_weight": 0, 00:14:13.271 "nvme_adminq_poll_period_us": 10000, 00:14:13.271 "nvme_ioq_poll_period_us": 0, 00:14:13.271 "io_queue_requests": 0, 00:14:13.271 "delay_cmd_submit": true, 00:14:13.271 "transport_retry_count": 4, 00:14:13.271 "bdev_retry_count": 3, 00:14:13.271 "transport_ack_timeout": 0, 00:14:13.271 "ctrlr_loss_timeout_sec": 0, 00:14:13.271 "reconnect_delay_sec": 0, 00:14:13.271 "fast_io_fail_timeout_sec": 0, 00:14:13.271 "disable_auto_failback": false, 00:14:13.271 "generate_uuids": false, 00:14:13.271 "transport_tos": 0, 00:14:13.271 "nvme_error_stat": false, 00:14:13.271 "rdma_srq_size": 0, 00:14:13.271 "io_path_stat": false, 00:14:13.271 "allow_accel_sequence": false, 00:14:13.271 "rdma_max_cq_size": 0, 00:14:13.271 "rdma_cm_event_timeout_ms": 0, 00:14:13.271 "dhchap_digests": [ 00:14:13.271 "sha256", 00:14:13.271 "sha384", 00:14:13.271 "sha512" 00:14:13.271 ], 00:14:13.271 "dhchap_dhgroups": [ 00:14:13.271 "null", 00:14:13.271 "ffdhe2048", 00:14:13.271 "ffdhe3072", 00:14:13.271 "ffdhe4096", 00:14:13.271 "ffdhe6144", 00:14:13.271 "ffdhe8192" 00:14:13.271 ] 00:14:13.271 } 00:14:13.271 }, 00:14:13.271 { 00:14:13.271 "method": "bdev_nvme_set_hotplug", 00:14:13.271 "params": { 00:14:13.271 "period_us": 100000, 00:14:13.271 "enable": false 00:14:13.271 } 00:14:13.271 }, 00:14:13.271 { 00:14:13.271 "method": "bdev_malloc_create", 00:14:13.271 "params": { 00:14:13.271 "name": "malloc0", 00:14:13.271 "num_blocks": 8192, 00:14:13.271 "block_size": 4096, 00:14:13.271 "physical_block_size": 4096, 00:14:13.271 "uuid": "dff8e1e2-35c8-4a6a-8f88-6fa028fb9b77", 00:14:13.271 "optimal_io_boundary": 0, 00:14:13.271 "md_size": 0, 00:14:13.271 "dif_type": 0, 00:14:13.271 "dif_is_head_of_md": false, 00:14:13.271 "dif_pi_format": 0 00:14:13.271 } 00:14:13.271 }, 00:14:13.271 { 00:14:13.271 "method": "bdev_wait_for_examine" 00:14:13.271 } 00:14:13.271 ] 00:14:13.271 }, 00:14:13.271 { 00:14:13.271 "subsystem": "nbd", 00:14:13.271 "config": [] 00:14:13.271 }, 00:14:13.271 { 00:14:13.271 "subsystem": "scheduler", 00:14:13.271 "config": [ 00:14:13.271 { 00:14:13.271 "method": "framework_set_scheduler", 00:14:13.271 "params": { 00:14:13.271 "name": "static" 00:14:13.271 } 00:14:13.271 } 00:14:13.271 ] 00:14:13.271 }, 00:14:13.271 { 00:14:13.271 "subsystem": "nvmf", 00:14:13.271 "config": [ 00:14:13.271 { 00:14:13.271 "method": "nvmf_set_config", 00:14:13.271 "params": { 00:14:13.271 "discovery_filter": "match_any", 00:14:13.271 "admin_cmd_passthru": { 00:14:13.271 "identify_ctrlr": false 00:14:13.271 } 00:14:13.271 } 00:14:13.271 }, 00:14:13.271 { 00:14:13.271 "method": "nvmf_set_max_subsystems", 00:14:13.271 "params": { 00:14:13.271 "max_subsystems": 1024 00:14:13.271 } 00:14:13.271 }, 00:14:13.271 { 00:14:13.271 "method": "nvmf_set_crdt", 00:14:13.271 "params": { 00:14:13.271 "crdt1": 0, 00:14:13.271 "crdt2": 0, 00:14:13.271 "crdt3": 0 00:14:13.271 } 00:14:13.271 }, 00:14:13.271 { 00:14:13.271 "method": "nvmf_create_transport", 00:14:13.271 "params": { 00:14:13.271 "trtype": "TCP", 00:14:13.271 "max_queue_depth": 128, 00:14:13.271 "max_io_qpairs_per_ctrlr": 127, 00:14:13.271 "in_capsule_data_size": 4096, 00:14:13.271 "max_io_size": 131072, 00:14:13.271 "io_unit_size": 131072, 00:14:13.271 "max_aq_depth": 128, 00:14:13.271 "num_shared_buffers": 511, 00:14:13.271 "buf_cache_size": 4294967295, 00:14:13.271 "dif_insert_or_strip": false, 00:14:13.271 "zcopy": false, 
00:14:13.271 "c2h_success": false, 00:14:13.271 "sock_priority": 0, 00:14:13.271 "abort_timeout_sec": 1, 00:14:13.271 "ack_timeout": 0, 00:14:13.271 "data_wr_pool_size": 0 00:14:13.271 } 00:14:13.271 }, 00:14:13.271 { 00:14:13.271 "method": "nvmf_create_subsystem", 00:14:13.271 "params": { 00:14:13.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.271 "allow_any_host": false, 00:14:13.271 "serial_number": "00000000000000000000", 00:14:13.271 "model_number": "SPDK bdev Controller", 00:14:13.271 "max_namespaces": 32, 00:14:13.272 "min_cntlid": 1, 00:14:13.272 "max_cntlid": 65519, 00:14:13.272 "ana_reporting": false 00:14:13.272 } 00:14:13.272 }, 00:14:13.272 { 00:14:13.272 "method": "nvmf_subsystem_add_host", 00:14:13.272 "params": { 00:14:13.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.272 "host": "nqn.2016-06.io.spdk:host1", 00:14:13.272 "psk": "key0" 00:14:13.272 } 00:14:13.272 }, 00:14:13.272 { 00:14:13.272 "method": "nvmf_subsystem_add_ns", 00:14:13.272 "params": { 00:14:13.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.272 "namespace": { 00:14:13.272 "nsid": 1, 00:14:13.272 "bdev_name": "malloc0", 00:14:13.272 "nguid": "DFF8E1E235C84A6A8F886FA028FB9B77", 00:14:13.272 "uuid": "dff8e1e2-35c8-4a6a-8f88-6fa028fb9b77", 00:14:13.272 "no_auto_visible": false 00:14:13.272 } 00:14:13.272 } 00:14:13.272 }, 00:14:13.272 { 00:14:13.272 "method": "nvmf_subsystem_add_listener", 00:14:13.272 "params": { 00:14:13.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.272 "listen_address": { 00:14:13.272 "trtype": "TCP", 00:14:13.272 "adrfam": "IPv4", 00:14:13.272 "traddr": "10.0.0.2", 00:14:13.272 "trsvcid": "4420" 00:14:13.272 }, 00:14:13.272 "secure_channel": false, 00:14:13.272 "sock_impl": "ssl" 00:14:13.272 } 00:14:13.272 } 00:14:13.272 ] 00:14:13.272 } 00:14:13.272 ] 00:14:13.272 }' 00:14:13.272 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:13.272 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.272 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73481 00:14:13.272 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:13.272 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73481 00:14:13.272 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73481 ']' 00:14:13.272 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.272 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:13.272 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.272 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:13.272 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.272 [2024-07-26 07:39:38.686818] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:14:13.272 [2024-07-26 07:39:38.687066] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.272 [2024-07-26 07:39:38.821676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.530 [2024-07-26 07:39:38.931445] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.530 [2024-07-26 07:39:38.931831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.530 [2024-07-26 07:39:38.932003] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.530 [2024-07-26 07:39:38.932133] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.530 [2024-07-26 07:39:38.932166] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:13.530 [2024-07-26 07:39:38.932347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.530 [2024-07-26 07:39:39.119917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:13.819 [2024-07-26 07:39:39.213963] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.819 [2024-07-26 07:39:39.245894] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:13.819 [2024-07-26 07:39:39.257660] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.077 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:14.077 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:14.077 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:14.077 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:14.077 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.336 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.336 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=73513 00:14:14.336 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 73513 /var/tmp/bdevperf.sock 00:14:14.336 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73513 ']' 00:14:14.336 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.336 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:14.336 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
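[editor's note] bdevperf is started with -z (wait for RPC) and the test then blocks in waitforlisten until /var/tmp/bdevperf.sock answers. The real helper lives in autotest_common.sh and is more thorough; the loop below is only a minimal stand-in for the idea, using the rpc_get_methods RPC as the liveness probe (retry count and sleep interval are illustrative assumptions):

    waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1       # process died while we were waiting
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
      done
      return 1                                       # socket never came up
    }
    waitforlisten_sketch 73513 /var/tmp/bdevperf.sock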
00:14:14.336 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:14.336 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.336 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:14.336 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:14:14.336 "subsystems": [ 00:14:14.336 { 00:14:14.336 "subsystem": "keyring", 00:14:14.336 "config": [ 00:14:14.336 { 00:14:14.336 "method": "keyring_file_add_key", 00:14:14.336 "params": { 00:14:14.336 "name": "key0", 00:14:14.336 "path": "/tmp/tmp.leHafEpJhQ" 00:14:14.336 } 00:14:14.336 } 00:14:14.336 ] 00:14:14.336 }, 00:14:14.336 { 00:14:14.336 "subsystem": "iobuf", 00:14:14.336 "config": [ 00:14:14.336 { 00:14:14.336 "method": "iobuf_set_options", 00:14:14.336 "params": { 00:14:14.336 "small_pool_count": 8192, 00:14:14.336 "large_pool_count": 1024, 00:14:14.336 "small_bufsize": 8192, 00:14:14.336 "large_bufsize": 135168 00:14:14.336 } 00:14:14.336 } 00:14:14.336 ] 00:14:14.336 }, 00:14:14.336 { 00:14:14.336 "subsystem": "sock", 00:14:14.336 "config": [ 00:14:14.336 { 00:14:14.336 "method": "sock_set_default_impl", 00:14:14.336 "params": { 00:14:14.336 "impl_name": "uring" 00:14:14.336 } 00:14:14.336 }, 00:14:14.336 { 00:14:14.336 "method": "sock_impl_set_options", 00:14:14.336 "params": { 00:14:14.336 "impl_name": "ssl", 00:14:14.336 "recv_buf_size": 4096, 00:14:14.336 "send_buf_size": 4096, 00:14:14.336 "enable_recv_pipe": true, 00:14:14.336 "enable_quickack": false, 00:14:14.336 "enable_placement_id": 0, 00:14:14.336 "enable_zerocopy_send_server": true, 00:14:14.336 "enable_zerocopy_send_client": false, 00:14:14.336 "zerocopy_threshold": 0, 00:14:14.336 "tls_version": 0, 00:14:14.336 "enable_ktls": false 00:14:14.336 } 00:14:14.336 }, 00:14:14.336 { 00:14:14.336 "method": "sock_impl_set_options", 00:14:14.336 "params": { 00:14:14.336 "impl_name": "posix", 00:14:14.336 "recv_buf_size": 2097152, 00:14:14.336 "send_buf_size": 2097152, 00:14:14.336 "enable_recv_pipe": true, 00:14:14.336 "enable_quickack": false, 00:14:14.336 "enable_placement_id": 0, 00:14:14.336 "enable_zerocopy_send_server": true, 00:14:14.336 "enable_zerocopy_send_client": false, 00:14:14.336 "zerocopy_threshold": 0, 00:14:14.336 "tls_version": 0, 00:14:14.336 "enable_ktls": false 00:14:14.336 } 00:14:14.336 }, 00:14:14.336 { 00:14:14.336 "method": "sock_impl_set_options", 00:14:14.336 "params": { 00:14:14.336 "impl_name": "uring", 00:14:14.336 "recv_buf_size": 2097152, 00:14:14.336 "send_buf_size": 2097152, 00:14:14.336 "enable_recv_pipe": true, 00:14:14.336 "enable_quickack": false, 00:14:14.336 "enable_placement_id": 0, 00:14:14.336 "enable_zerocopy_send_server": false, 00:14:14.336 "enable_zerocopy_send_client": false, 00:14:14.336 "zerocopy_threshold": 0, 00:14:14.336 "tls_version": 0, 00:14:14.336 "enable_ktls": false 00:14:14.336 } 00:14:14.336 } 00:14:14.336 ] 00:14:14.336 }, 00:14:14.336 { 00:14:14.336 "subsystem": "vmd", 00:14:14.336 "config": [] 00:14:14.336 }, 00:14:14.336 { 00:14:14.336 "subsystem": "accel", 00:14:14.336 "config": [ 00:14:14.336 { 00:14:14.337 "method": "accel_set_options", 00:14:14.337 "params": { 00:14:14.337 "small_cache_size": 128, 00:14:14.337 "large_cache_size": 16, 00:14:14.337 "task_count": 2048, 00:14:14.337 "sequence_count": 2048, 00:14:14.337 "buf_count": 2048 
00:14:14.337 } 00:14:14.337 } 00:14:14.337 ] 00:14:14.337 }, 00:14:14.337 { 00:14:14.337 "subsystem": "bdev", 00:14:14.337 "config": [ 00:14:14.337 { 00:14:14.337 "method": "bdev_set_options", 00:14:14.337 "params": { 00:14:14.337 "bdev_io_pool_size": 65535, 00:14:14.337 "bdev_io_cache_size": 256, 00:14:14.337 "bdev_auto_examine": true, 00:14:14.337 "iobuf_small_cache_size": 128, 00:14:14.337 "iobuf_large_cache_size": 16 00:14:14.337 } 00:14:14.337 }, 00:14:14.337 { 00:14:14.337 "method": "bdev_raid_set_options", 00:14:14.337 "params": { 00:14:14.337 "process_window_size_kb": 1024, 00:14:14.337 "process_max_bandwidth_mb_sec": 0 00:14:14.337 } 00:14:14.337 }, 00:14:14.337 { 00:14:14.337 "method": "bdev_iscsi_set_options", 00:14:14.337 "params": { 00:14:14.337 "timeout_sec": 30 00:14:14.337 } 00:14:14.337 }, 00:14:14.337 { 00:14:14.337 "method": "bdev_nvme_set_options", 00:14:14.337 "params": { 00:14:14.337 "action_on_timeout": "none", 00:14:14.337 "timeout_us": 0, 00:14:14.337 "timeout_admin_us": 0, 00:14:14.337 "keep_alive_timeout_ms": 10000, 00:14:14.337 "arbitration_burst": 0, 00:14:14.337 "low_priority_weight": 0, 00:14:14.337 "medium_priority_weight": 0, 00:14:14.337 "high_priority_weight": 0, 00:14:14.337 "nvme_adminq_poll_period_us": 10000, 00:14:14.337 "nvme_ioq_poll_period_us": 0, 00:14:14.337 "io_queue_requests": 512, 00:14:14.337 "delay_cmd_submit": true, 00:14:14.337 "transport_retry_count": 4, 00:14:14.337 "bdev_retry_count": 3, 00:14:14.337 "transport_ack_timeout": 0, 00:14:14.337 "ctrlr_loss_timeout_sec": 0, 00:14:14.337 "reconnect_delay_sec": 0, 00:14:14.337 "fast_io_fail_timeout_sec": 0, 00:14:14.337 "disable_auto_failback": false, 00:14:14.337 "generate_uuids": false, 00:14:14.337 "transport_tos": 0, 00:14:14.337 "nvme_error_stat": false, 00:14:14.337 "rdma_srq_size": 0, 00:14:14.337 "io_path_stat": false, 00:14:14.337 "allow_accel_sequence": false, 00:14:14.337 "rdma_max_cq_size": 0, 00:14:14.337 "rdma_cm_event_timeout_ms": 0, 00:14:14.337 "dhchap_digests": [ 00:14:14.337 "sha256", 00:14:14.337 "sha384", 00:14:14.337 "sha512" 00:14:14.337 ], 00:14:14.337 "dhchap_dhgroups": [ 00:14:14.337 "null", 00:14:14.337 "ffdhe2048", 00:14:14.337 "ffdhe3072", 00:14:14.337 "ffdhe4096", 00:14:14.337 "ffdhe6144", 00:14:14.337 "ffdhe8192" 00:14:14.337 ] 00:14:14.337 } 00:14:14.337 }, 00:14:14.337 { 00:14:14.337 "method": "bdev_nvme_attach_controller", 00:14:14.337 "params": { 00:14:14.337 "name": "nvme0", 00:14:14.337 "trtype": "TCP", 00:14:14.337 "adrfam": "IPv4", 00:14:14.337 "traddr": "10.0.0.2", 00:14:14.337 "trsvcid": "4420", 00:14:14.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:14.337 "prchk_reftag": false, 00:14:14.337 "prchk_guard": false, 00:14:14.337 "ctrlr_loss_timeout_sec": 0, 00:14:14.337 "reconnect_delay_sec": 0, 00:14:14.337 "fast_io_fail_timeout_sec": 0, 00:14:14.337 "psk": "key0", 00:14:14.337 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:14.337 "hdgst": false, 00:14:14.337 "ddgst": false 00:14:14.337 } 00:14:14.337 }, 00:14:14.337 { 00:14:14.337 "method": "bdev_nvme_set_hotplug", 00:14:14.337 "params": { 00:14:14.337 "period_us": 100000, 00:14:14.337 "enable": false 00:14:14.337 } 00:14:14.337 }, 00:14:14.337 { 00:14:14.337 "method": "bdev_enable_histogram", 00:14:14.337 "params": { 00:14:14.337 "name": "nvme0n1", 00:14:14.337 "enable": true 00:14:14.337 } 00:14:14.337 }, 00:14:14.337 { 00:14:14.337 "method": "bdev_wait_for_examine" 00:14:14.337 } 00:14:14.337 ] 00:14:14.337 }, 00:14:14.337 { 00:14:14.337 "subsystem": "nbd", 00:14:14.337 "config": [] 00:14:14.337 } 
00:14:14.337 ] 00:14:14.337 }' 00:14:14.337 [2024-07-26 07:39:39.737314] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:14:14.337 [2024-07-26 07:39:39.738230] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73513 ] 00:14:14.337 [2024-07-26 07:39:39.879231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.595 [2024-07-26 07:39:40.003302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.595 [2024-07-26 07:39:40.157093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:14.853 [2024-07-26 07:39:40.212142] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:15.419 07:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:15.419 07:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:15.419 07:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:14:15.419 07:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:15.419 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.419 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:15.678 Running I/O for 1 seconds... 
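[editor's note] Once bdevperf is up, the test confirms over its RPC socket that the TLS-backed controller declared in the config above actually attached before driving I/O. The check reduces to the bdev_nvme_get_controllers / jq pipeline visible in the xtrace, roughly:

    name=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]] || exit 1    # controller from the bdevperf config must be present
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests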
00:14:16.611 00:14:16.611 Latency(us) 00:14:16.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.611 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.611 Verification LBA range: start 0x0 length 0x2000 00:14:16.612 nvme0n1 : 1.03 4074.33 15.92 0.00 0.00 30980.60 9294.20 22639.71 00:14:16.612 =================================================================================================================== 00:14:16.612 Total : 4074.33 15.92 0.00 0.00 30980.60 9294.20 22639.71 00:14:16.612 0 00:14:16.612 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:14:16.612 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:14:16.612 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:16.612 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:14:16.612 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:14:16.612 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:14:16.612 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:16.612 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:14:16.612 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:14:16.612 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:14:16.612 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:16.612 nvmf_trace.0 00:14:16.870 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:14:16.870 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 73513 00:14:16.870 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73513 ']' 00:14:16.870 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73513 00:14:16.870 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:16.870 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:16.870 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73513 00:14:16.870 killing process with pid 73513 00:14:16.870 Received shutdown signal, test time was about 1.000000 seconds 00:14:16.870 00:14:16.870 Latency(us) 00:14:16.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.870 =================================================================================================================== 00:14:16.870 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:16.870 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:16.870 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:16.870 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73513' 00:14:16.870 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 73513 00:14:16.870 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73513 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:17.128 rmmod nvme_tcp 00:14:17.128 rmmod nvme_fabrics 00:14:17.128 rmmod nvme_keyring 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 73481 ']' 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 73481 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73481 ']' 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73481 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73481 00:14:17.128 killing process with pid 73481 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73481' 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73481 00:14:17.128 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73481 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
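[editor's note] The teardown above first archives /dev/shm/nvmf_trace.0 and then stops bdevperf (pid 73513) and the target (pid 73481) through the killprocess helper. Reduced to the steps visible in the xtrace, the helper behaves roughly as follows; the real version in autotest_common.sh also special-cases sudo-wrapped processes and other corner cases:

    killprocess_sketch() {
      local pid=$1
      [ -n "$pid" ] || return 1
      ps --no-headers -o comm= "$pid" > /dev/null || return 1   # nothing left to kill
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                                       # reap it if it is our child
    }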
00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.P4yT23dagx /tmp/tmp.rdsDKaCaxa /tmp/tmp.leHafEpJhQ 00:14:17.695 00:14:17.695 real 1m27.017s 00:14:17.695 user 2m17.254s 00:14:17.695 sys 0m28.162s 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.695 ************************************ 00:14:17.695 END TEST nvmf_tls 00:14:17.695 ************************************ 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:17.695 ************************************ 00:14:17.695 START TEST nvmf_fips 00:14:17.695 ************************************ 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:17.695 * Looking for test storage... 00:14:17.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 
-- # NET_TYPE=virt 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 
-- # : 0 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:17.695 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:17.696 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:14:17.956 Error setting digest 00:14:17.956 00C21843697F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:14:17.956 00C21843697F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:17.956 
07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.956 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:17.957 Cannot find device "nvmf_tgt_br" 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # true 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:17.957 Cannot find device "nvmf_tgt_br2" 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # true 00:14:17.957 07:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:17.957 Cannot find device "nvmf_tgt_br" 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:17.957 Cannot find device "nvmf_tgt_br2" 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:17.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:17.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:17.957 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:18.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:14:18.215 00:14:18.215 --- 10.0.0.2 ping statistics --- 00:14:18.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.215 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:18.215 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:18.215 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:14:18.215 00:14:18.215 --- 10.0.0.3 ping statistics --- 00:14:18.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.215 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:18.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:18.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:14:18.215 00:14:18.215 --- 10.0.0.1 ping statistics --- 00:14:18.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.215 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=73778 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 73778 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73778 ']' 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.215 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:18.473 [2024-07-26 07:39:43.868792] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
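[editor's note] Before the FIPS-mode target is launched, nvmftestinit rebuilds the virtual test network seen in the ip/iptables commands above: the initiator side stays in the root namespace on 10.0.0.1 while the target interfaces live in nvmf_tgt_ns_spdk on 10.0.0.2/10.0.0.3, all joined through the nvmf_br bridge. Condensed to its essential commands (error handling and the second target interface's bring-up omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator <-> bridge
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target    <-> bridge
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                           # reachability check, as logged above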
00:14:18.473 [2024-07-26 07:39:43.868896] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.473 [2024-07-26 07:39:44.012031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.731 [2024-07-26 07:39:44.136741] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.731 [2024-07-26 07:39:44.136796] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.731 [2024-07-26 07:39:44.136811] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.731 [2024-07-26 07:39:44.136822] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.731 [2024-07-26 07:39:44.136831] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.731 [2024-07-26 07:39:44.136873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.731 [2024-07-26 07:39:44.213721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:19.311 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.311 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:14:19.312 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:19.312 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:19.312 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:19.312 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.312 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:14:19.312 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:19.312 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:19.312 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:19.312 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:19.312 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:19.312 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:19.312 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:19.586 [2024-07-26 07:39:45.126434] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.586 [2024-07-26 07:39:45.142383] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:19.586 [2024-07-26 07:39:45.142621] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.856 [2024-07-26 07:39:45.177647] 
tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:19.856 malloc0 00:14:19.856 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:19.856 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=73818 00:14:19.856 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:19.856 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 73818 /var/tmp/bdevperf.sock 00:14:19.856 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73818 ']' 00:14:19.856 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:19.856 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:19.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:19.856 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:19.856 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:19.856 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:19.856 [2024-07-26 07:39:45.292400] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:14:19.856 [2024-07-26 07:39:45.292495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73818 ] 00:14:19.856 [2024-07-26 07:39:45.432164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.115 [2024-07-26 07:39:45.532322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.115 [2024-07-26 07:39:45.605850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:20.681 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:20.681 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:14:20.681 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:20.939 [2024-07-26 07:39:46.359790] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:20.939 [2024-07-26 07:39:46.359938] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:20.939 TLSTESTn1 00:14:20.940 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:21.198 Running I/O for 10 seconds... 
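[editor's note] With the FIPS target listening on 10.0.0.2:4420, the client half of the run above amounts to a PSK file in NVMe TLS interchange format plus one bdev_nvme_attach_controller call pointing --psk at it. Reassembled from the fips.sh xtrace (same key, paths and NQNs as in the log):

    key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: > "$key_path"
    chmod 0600 "$key_path"                      # the PSK file must not be world-readable
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

The *WARNING* lines in the log come from exactly this path: passing a PSK file directly is the deprecated mechanism scheduled for removal in v24.09, with keyring-based keys (as used in the earlier nvmf_tls run) as its replacement.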
00:14:31.170 00:14:31.170 Latency(us) 00:14:31.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.170 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:31.170 Verification LBA range: start 0x0 length 0x2000 00:14:31.170 TLSTESTn1 : 10.03 4238.96 16.56 0.00 0.00 30137.07 7745.16 19899.11 00:14:31.170 =================================================================================================================== 00:14:31.170 Total : 4238.96 16.56 0.00 0.00 30137.07 7745.16 19899.11 00:14:31.170 0 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:31.171 nvmf_trace.0 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73818 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73818 ']' 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73818 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73818 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:31.171 killing process with pid 73818 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73818' 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73818 00:14:31.171 Received shutdown signal, test time was about 10.000000 seconds 00:14:31.171 00:14:31.171 Latency(us) 00:14:31.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.171 =================================================================================================================== 00:14:31.171 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:31.171 [2024-07-26 07:39:56.723086] 
app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:31.171 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73818 00:14:31.429 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:31.429 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:31.429 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:31.688 rmmod nvme_tcp 00:14:31.688 rmmod nvme_fabrics 00:14:31.688 rmmod nvme_keyring 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 73778 ']' 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 73778 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73778 ']' 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73778 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73778 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73778' 00:14:31.688 killing process with pid 73778 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73778 00:14:31.688 [2024-07-26 07:39:57.161618] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:31.688 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73778 00:14:31.947 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:31.947 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:31.947 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:31.947 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.947 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:31.947 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.947 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.947 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.947 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:31.947 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:31.947 ************************************ 00:14:31.947 END TEST nvmf_fips 00:14:31.947 ************************************ 00:14:31.947 00:14:31.947 real 0m14.367s 00:14:31.947 user 0m19.312s 00:14:31.947 sys 0m5.820s 00:14:31.947 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.947 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:32.206 07:39:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:14:32.206 07:39:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ virt == phy ]] 00:14:32.206 07:39:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:14:32.206 00:14:32.206 real 4m26.272s 00:14:32.206 user 9m12.589s 00:14:32.206 sys 1m1.260s 00:14:32.206 07:39:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:32.206 07:39:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:32.206 ************************************ 00:14:32.206 END TEST nvmf_target_extra 00:14:32.206 ************************************ 00:14:32.206 07:39:57 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:32.206 07:39:57 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:32.206 07:39:57 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.206 07:39:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:32.206 ************************************ 00:14:32.206 START TEST nvmf_host 00:14:32.206 ************************************ 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:32.206 * Looking for test storage... 
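As a quick sanity check on the TLSTESTn1 table above, the MiB/s column is simply IOPS times the 4096-byte I/O size: 4238.96 × 4096 / 2^20 ≈ 16.56 MiB/s. The teardown that follows the run is driven entirely by the traps installed earlier (trap cleanup EXIT at fips.sh:133 plus the nvmftestfini trap): bdevperf (pid 73818) is reaped first, then the nvmf target (pid 73778), the nvme_tcp/nvme_fabrics/nvme_keyring modules are unloaded, the test namespace is torn down, and the PSK file is removed. The killprocess helper used for both pids follows roughly this shape (a simplified sketch of the autotest_common.sh helper, not the full implementation):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0      # nothing to do if the process is already gone
        echo "killing process with pid $pid"
        kill "$pid"                     # default SIGTERM lets the SPDK app run its shutdown path
        wait "$pid"                     # reap the child and propagate its exit status
    }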
00:14:32.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:14:32.206 07:39:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:32.207 07:39:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:32.207 07:39:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.207 07:39:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:32.207 ************************************ 00:14:32.207 START TEST nvmf_identify 00:14:32.207 ************************************ 00:14:32.207 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:32.207 * Looking for test storage... 
00:14:32.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:32.207 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:32.207 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:32.207 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.207 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.207 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.207 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.207 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.207 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.207 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.207 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.207 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.207 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:32.466 Cannot find device "nvmf_tgt_br" 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # true 00:14:32.466 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:32.466 Cannot find device "nvmf_tgt_br2" 00:14:32.467 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # true 00:14:32.467 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:32.467 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:32.467 Cannot find device "nvmf_tgt_br" 00:14:32.467 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@158 -- # true 00:14:32.467 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:32.467 Cannot find device "nvmf_tgt_br2" 00:14:32.467 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:14:32.467 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:32.467 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:32.467 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:32.467 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:32.467 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:32.467 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:32.467 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:32.467 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:32.467 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:32.467 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:32.467 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:32.467 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:32.467 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:32.467 07:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:32.467 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:32.467 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:32.467 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:32.467 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:32.467 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:32.467 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:32.467 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:32.467 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:32.467 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:32.467 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
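Because this run uses NET_TYPE=virt, nvmf_veth_init builds the whole test network out of veth pairs: namespace nvmf_tgt_ns_spdk gets the two target-side interfaces (10.0.0.2 and 10.0.0.3), the initiator keeps 10.0.0.1 on the host side, and the host-side peer ends are enslaved to a bridge named nvmf_br. Condensed from the ip commands in the trace (the remaining bridge memberships, the iptables ACCEPT rules, and the ping checks follow immediately below; the intermediate 'ip link set ... up' steps are elided here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br   # host-side peers all join the bridge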
00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:32.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:14:32.726 00:14:32.726 --- 10.0.0.2 ping statistics --- 00:14:32.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.726 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:32.726 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:32.726 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:14:32.726 00:14:32.726 --- 10.0.0.3 ping statistics --- 00:14:32.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.726 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:32.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:32.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:14:32.726 00:14:32.726 --- 10.0.0.1 ping statistics --- 00:14:32.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.726 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74192 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74192 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 74192 ']' 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:32.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:32.726 07:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:32.726 [2024-07-26 07:39:58.241691] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:14:32.726 [2024-07-26 07:39:58.241803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.985 [2024-07-26 07:39:58.385949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:32.985 [2024-07-26 07:39:58.523585] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.985 [2024-07-26 07:39:58.523658] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.985 [2024-07-26 07:39:58.523686] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.985 [2024-07-26 07:39:58.523695] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.985 [2024-07-26 07:39:58.523702] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
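With the namespace in place, identify.sh:18 starts the target inside it via ip netns exec, which is why the listener created below binds the namespaced address 10.0.0.2. The launch line, with its flags annotated (values taken from the trace; the meanings line up with the notices the app prints):

    # -i 0      shared-memory instance id, hence the trace buffer /dev/shm/nvmf_trace.0
    # -e 0xFFFF tracepoint group mask, hence "Tracepoint Group Mask 0xFFFF specified"
    # -m 0xF    core mask for 4 reactors, matching "Total cores available: 4"
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

A runtime snapshot of those tracepoints can be captured with the command the app itself suggests, spdk_trace -s nvmf -i 0.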
00:14:32.985 [2024-07-26 07:39:58.523996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.985 [2024-07-26 07:39:58.524326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.985 [2024-07-26 07:39:58.524488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.985 [2024-07-26 07:39:58.524496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:33.243 [2024-07-26 07:39:58.597640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:33.810 [2024-07-26 07:39:59.235224] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:33.810 Malloc0 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:33.810 [2024-07-26 07:39:59.338511] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:33.810 [ 00:14:33.810 { 00:14:33.810 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:33.810 "subtype": "Discovery", 00:14:33.810 "listen_addresses": [ 00:14:33.810 { 00:14:33.810 "trtype": "TCP", 00:14:33.810 "adrfam": "IPv4", 00:14:33.810 "traddr": "10.0.0.2", 00:14:33.810 "trsvcid": "4420" 00:14:33.810 } 00:14:33.810 ], 00:14:33.810 "allow_any_host": true, 00:14:33.810 "hosts": [] 00:14:33.810 }, 00:14:33.810 { 00:14:33.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.810 "subtype": "NVMe", 00:14:33.810 "listen_addresses": [ 00:14:33.810 { 00:14:33.810 "trtype": "TCP", 00:14:33.810 "adrfam": "IPv4", 00:14:33.810 "traddr": "10.0.0.2", 00:14:33.810 "trsvcid": "4420" 00:14:33.810 } 00:14:33.810 ], 00:14:33.810 "allow_any_host": true, 00:14:33.810 "hosts": [], 00:14:33.810 "serial_number": "SPDK00000000000001", 00:14:33.810 "model_number": "SPDK bdev Controller", 00:14:33.810 "max_namespaces": 32, 00:14:33.810 "min_cntlid": 1, 00:14:33.810 "max_cntlid": 65519, 00:14:33.810 "namespaces": [ 00:14:33.810 { 00:14:33.810 "nsid": 1, 00:14:33.810 "bdev_name": "Malloc0", 00:14:33.810 "name": "Malloc0", 00:14:33.810 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:33.810 "eui64": "ABCDEF0123456789", 00:14:33.810 "uuid": "c598babf-ad66-471c-a452-fe581c61d3cf" 00:14:33.810 } 00:14:33.810 ] 00:14:33.810 } 00:14:33.810 ] 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.810 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:33.810 [2024-07-26 07:39:59.392997] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
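The subsystem dump above is the direct result of the rpc_cmd calls at identify.sh:24-37; rpc_cmd is effectively scripts/rpc.py talking to the target's default /var/tmp/spdk.sock. Spelled out as plain RPC invocations (a condensed equivalent of what the trace shows, not an exact replay):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_get_subsystems    # emits the JSON listing shown above

The spdk_nvme_identify run launched at identify.sh:39 then connects to that discovery listener, and its -L all flag is what enables the verbose nvme_tcp/nvme_ctrlr DEBUG lines that follow, tracing the fabric connect, the CC.EN/CSTS.RDY enable handshake, IDENTIFY, AER configuration, and keep-alive setup step by step.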
00:14:33.810 [2024-07-26 07:39:59.393060] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74228 ] 00:14:34.071 [2024-07-26 07:39:59.529332] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:34.071 [2024-07-26 07:39:59.529422] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:34.071 [2024-07-26 07:39:59.529430] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:34.071 [2024-07-26 07:39:59.529442] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:34.071 [2024-07-26 07:39:59.529452] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:34.071 [2024-07-26 07:39:59.529665] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:34.071 [2024-07-26 07:39:59.529726] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13792c0 0 00:14:34.072 [2024-07-26 07:39:59.541509] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:34.072 [2024-07-26 07:39:59.541534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:34.072 [2024-07-26 07:39:59.541557] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:34.072 [2024-07-26 07:39:59.541561] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:34.072 [2024-07-26 07:39:59.541629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.541638] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.541643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13792c0) 00:14:34.072 [2024-07-26 07:39:59.541658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:34.072 [2024-07-26 07:39:59.541688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ba940, cid 0, qid 0 00:14:34.072 [2024-07-26 07:39:59.549522] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.072 [2024-07-26 07:39:59.549545] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.072 [2024-07-26 07:39:59.549550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.549556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ba940) on tqpair=0x13792c0 00:14:34.072 [2024-07-26 07:39:59.549568] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:34.072 [2024-07-26 07:39:59.549577] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:34.072 [2024-07-26 07:39:59.549584] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:34.072 [2024-07-26 07:39:59.549612] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.549617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.072 
[2024-07-26 07:39:59.549622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13792c0) 00:14:34.072 [2024-07-26 07:39:59.549632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.072 [2024-07-26 07:39:59.549659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ba940, cid 0, qid 0 00:14:34.072 [2024-07-26 07:39:59.549718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.072 [2024-07-26 07:39:59.549726] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.072 [2024-07-26 07:39:59.549730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.549735] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ba940) on tqpair=0x13792c0 00:14:34.072 [2024-07-26 07:39:59.549741] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:34.072 [2024-07-26 07:39:59.549749] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:34.072 [2024-07-26 07:39:59.549758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.549762] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.549766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13792c0) 00:14:34.072 [2024-07-26 07:39:59.549774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.072 [2024-07-26 07:39:59.549792] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ba940, cid 0, qid 0 00:14:34.072 [2024-07-26 07:39:59.549842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.072 [2024-07-26 07:39:59.549850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.072 [2024-07-26 07:39:59.549854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.549858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ba940) on tqpair=0x13792c0 00:14:34.072 [2024-07-26 07:39:59.549865] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:34.072 [2024-07-26 07:39:59.549874] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:34.072 [2024-07-26 07:39:59.549882] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.549886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.549890] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13792c0) 00:14:34.072 [2024-07-26 07:39:59.549898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.072 [2024-07-26 07:39:59.549916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ba940, cid 0, qid 0 00:14:34.072 [2024-07-26 07:39:59.549966] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.072 [2024-07-26 07:39:59.549973] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.072 [2024-07-26 07:39:59.549978] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.549982] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ba940) on tqpair=0x13792c0 00:14:34.072 [2024-07-26 07:39:59.549988] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:34.072 [2024-07-26 07:39:59.549998] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.550003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.550007] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13792c0) 00:14:34.072 [2024-07-26 07:39:59.550015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.072 [2024-07-26 07:39:59.550032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ba940, cid 0, qid 0 00:14:34.072 [2024-07-26 07:39:59.550077] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.072 [2024-07-26 07:39:59.550084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.072 [2024-07-26 07:39:59.550088] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.550092] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ba940) on tqpair=0x13792c0 00:14:34.072 [2024-07-26 07:39:59.550098] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:34.072 [2024-07-26 07:39:59.550103] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:34.072 [2024-07-26 07:39:59.550112] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:34.072 [2024-07-26 07:39:59.550218] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:34.072 [2024-07-26 07:39:59.550224] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:34.072 [2024-07-26 07:39:59.550235] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.550239] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.550243] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13792c0) 00:14:34.072 [2024-07-26 07:39:59.550251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.072 [2024-07-26 07:39:59.550269] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ba940, cid 0, qid 0 00:14:34.072 [2024-07-26 07:39:59.550322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.072 [2024-07-26 07:39:59.550330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.072 [2024-07-26 07:39:59.550334] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.072 
[2024-07-26 07:39:59.550338] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ba940) on tqpair=0x13792c0 00:14:34.072 [2024-07-26 07:39:59.550344] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:34.072 [2024-07-26 07:39:59.550355] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.550360] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.550364] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13792c0) 00:14:34.072 [2024-07-26 07:39:59.550371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.072 [2024-07-26 07:39:59.550389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ba940, cid 0, qid 0 00:14:34.072 [2024-07-26 07:39:59.550433] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.072 [2024-07-26 07:39:59.550440] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.072 [2024-07-26 07:39:59.550444] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.550449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ba940) on tqpair=0x13792c0 00:14:34.072 [2024-07-26 07:39:59.550453] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:34.072 [2024-07-26 07:39:59.550459] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:34.072 [2024-07-26 07:39:59.550480] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:34.072 [2024-07-26 07:39:59.550493] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:34.072 [2024-07-26 07:39:59.550505] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.550510] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13792c0) 00:14:34.072 [2024-07-26 07:39:59.550518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.072 [2024-07-26 07:39:59.550539] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ba940, cid 0, qid 0 00:14:34.072 [2024-07-26 07:39:59.550664] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.072 [2024-07-26 07:39:59.550675] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.072 [2024-07-26 07:39:59.550680] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.072 [2024-07-26 07:39:59.550684] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13792c0): datao=0, datal=4096, cccid=0 00:14:34.072 [2024-07-26 07:39:59.550689] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13ba940) on tqpair(0x13792c0): expected_datao=0, payload_size=4096 00:14:34.072 [2024-07-26 07:39:59.550694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.072 
[2024-07-26 07:39:59.550703] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.550708] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.550718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.073 [2024-07-26 07:39:59.550725] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.073 [2024-07-26 07:39:59.550729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.550733] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ba940) on tqpair=0x13792c0 00:14:34.073 [2024-07-26 07:39:59.550743] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:34.073 [2024-07-26 07:39:59.550750] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:34.073 [2024-07-26 07:39:59.550755] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:34.073 [2024-07-26 07:39:59.550767] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:34.073 [2024-07-26 07:39:59.550772] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:34.073 [2024-07-26 07:39:59.550778] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:34.073 [2024-07-26 07:39:59.550787] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:34.073 [2024-07-26 07:39:59.550796] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.550800] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.550804] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13792c0) 00:14:34.073 [2024-07-26 07:39:59.550813] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:34.073 [2024-07-26 07:39:59.550836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ba940, cid 0, qid 0 00:14:34.073 [2024-07-26 07:39:59.550893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.073 [2024-07-26 07:39:59.550901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.073 [2024-07-26 07:39:59.550905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.550909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ba940) on tqpair=0x13792c0 00:14:34.073 [2024-07-26 07:39:59.550918] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.550922] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.550926] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13792c0) 00:14:34.073 [2024-07-26 07:39:59.550933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.073 [2024-07-26 07:39:59.550940] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.550944] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.550948] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13792c0) 00:14:34.073 [2024-07-26 07:39:59.550954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.073 [2024-07-26 07:39:59.550960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.550964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.550968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13792c0) 00:14:34.073 [2024-07-26 07:39:59.550974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.073 [2024-07-26 07:39:59.550980] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.550984] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.550988] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13792c0) 00:14:34.073 [2024-07-26 07:39:59.550994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.073 [2024-07-26 07:39:59.550999] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:34.073 [2024-07-26 07:39:59.551009] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:34.073 [2024-07-26 07:39:59.551016] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.551020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13792c0) 00:14:34.073 [2024-07-26 07:39:59.551027] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.073 [2024-07-26 07:39:59.551052] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ba940, cid 0, qid 0 00:14:34.073 [2024-07-26 07:39:59.551060] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13baac0, cid 1, qid 0 00:14:34.073 [2024-07-26 07:39:59.551066] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13bac40, cid 2, qid 0 00:14:34.073 [2024-07-26 07:39:59.551071] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13badc0, cid 3, qid 0 00:14:34.073 [2024-07-26 07:39:59.551076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13baf40, cid 4, qid 0 00:14:34.073 [2024-07-26 07:39:59.551161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.073 [2024-07-26 07:39:59.551168] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.073 [2024-07-26 07:39:59.551172] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.551177] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13baf40) on tqpair=0x13792c0 00:14:34.073 [2024-07-26 07:39:59.551183] 
nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:34.073 [2024-07-26 07:39:59.551189] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:34.073 [2024-07-26 07:39:59.551202] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.551208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13792c0) 00:14:34.073 [2024-07-26 07:39:59.551215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.073 [2024-07-26 07:39:59.551234] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13baf40, cid 4, qid 0 00:14:34.073 [2024-07-26 07:39:59.551297] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.073 [2024-07-26 07:39:59.551305] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.073 [2024-07-26 07:39:59.551309] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.551313] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13792c0): datao=0, datal=4096, cccid=4 00:14:34.073 [2024-07-26 07:39:59.551318] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13baf40) on tqpair(0x13792c0): expected_datao=0, payload_size=4096 00:14:34.073 [2024-07-26 07:39:59.551322] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.551330] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.551335] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.551343] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.073 [2024-07-26 07:39:59.551350] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.073 [2024-07-26 07:39:59.551354] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.551358] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13baf40) on tqpair=0x13792c0 00:14:34.073 [2024-07-26 07:39:59.551373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:34.073 [2024-07-26 07:39:59.551401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.551408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13792c0) 00:14:34.073 [2024-07-26 07:39:59.551415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.073 [2024-07-26 07:39:59.551423] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.551428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.551431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13792c0) 00:14:34.073 [2024-07-26 07:39:59.551438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.073 [2024-07-26 07:39:59.551462] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x13baf40, cid 4, qid 0 00:14:34.073 [2024-07-26 07:39:59.551491] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13bb0c0, cid 5, qid 0 00:14:34.073 [2024-07-26 07:39:59.551589] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.073 [2024-07-26 07:39:59.551597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.073 [2024-07-26 07:39:59.551601] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.551605] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13792c0): datao=0, datal=1024, cccid=4 00:14:34.073 [2024-07-26 07:39:59.551610] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13baf40) on tqpair(0x13792c0): expected_datao=0, payload_size=1024 00:14:34.073 [2024-07-26 07:39:59.551615] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.551622] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.551626] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.551632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.073 [2024-07-26 07:39:59.551639] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.073 [2024-07-26 07:39:59.551643] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.551647] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13bb0c0) on tqpair=0x13792c0 00:14:34.073 [2024-07-26 07:39:59.551665] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.073 [2024-07-26 07:39:59.551674] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.073 [2024-07-26 07:39:59.551678] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.551682] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13baf40) on tqpair=0x13792c0 00:14:34.073 [2024-07-26 07:39:59.551696] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.073 [2024-07-26 07:39:59.551701] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13792c0) 00:14:34.073 [2024-07-26 07:39:59.551709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.074 [2024-07-26 07:39:59.551735] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13baf40, cid 4, qid 0 00:14:34.074 [2024-07-26 07:39:59.551807] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.074 [2024-07-26 07:39:59.551814] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.074 [2024-07-26 07:39:59.551819] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.074 [2024-07-26 07:39:59.551823] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13792c0): datao=0, datal=3072, cccid=4 00:14:34.074 [2024-07-26 07:39:59.551827] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13baf40) on tqpair(0x13792c0): expected_datao=0, payload_size=3072 00:14:34.074 [2024-07-26 07:39:59.551832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.074 [2024-07-26 07:39:59.551839] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.074 [2024-07-26 07:39:59.551844] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.074 [2024-07-26 07:39:59.551852] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.074 [2024-07-26 07:39:59.551859] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.074 [2024-07-26 07:39:59.551863] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.074 [2024-07-26 07:39:59.551867] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13baf40) on tqpair=0x13792c0 00:14:34.074 [2024-07-26 07:39:59.551878] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.074 [2024-07-26 07:39:59.551883] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13792c0) 00:14:34.074 [2024-07-26 07:39:59.551891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.074 [2024-07-26 07:39:59.551915] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13baf40, cid 4, qid 0 00:14:34.074 [2024-07-26 07:39:59.551980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.074 [2024-07-26 07:39:59.551987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.074 [2024-07-26 07:39:59.551991] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.074 [2024-07-26 07:39:59.551995] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13792c0): datao=0, datal=8, cccid=4 00:14:34.074 [2024-07-26 07:39:59.552000] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13baf40) on tqpair(0x13792c0): expected_datao=0, payload_size=8 00:14:34.074 [2024-07-26 07:39:59.552005] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.074 ===================================================== 00:14:34.074 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:34.074 ===================================================== 00:14:34.074 Controller Capabilities/Features 00:14:34.074 ================================ 00:14:34.074 Vendor ID: 0000 00:14:34.074 Subsystem Vendor ID: 0000 00:14:34.074 Serial Number: .................... 00:14:34.074 Model Number: ........................................ 
00:14:34.074 Firmware Version: 24.09 00:14:34.074 Recommended Arb Burst: 0 00:14:34.074 IEEE OUI Identifier: 00 00 00 00:14:34.074 Multi-path I/O 00:14:34.074 May have multiple subsystem ports: No 00:14:34.074 May have multiple controllers: No 00:14:34.074 Associated with SR-IOV VF: No 00:14:34.074 Max Data Transfer Size: 131072 00:14:34.074 Max Number of Namespaces: 0 00:14:34.074 Max Number of I/O Queues: 1024 00:14:34.074 NVMe Specification Version (VS): 1.3 00:14:34.074 NVMe Specification Version (Identify): 1.3 00:14:34.074 Maximum Queue Entries: 128 00:14:34.074 Contiguous Queues Required: Yes 00:14:34.074 Arbitration Mechanisms Supported 00:14:34.074 Weighted Round Robin: Not Supported 00:14:34.074 Vendor Specific: Not Supported 00:14:34.074 Reset Timeout: 15000 ms 00:14:34.074 Doorbell Stride: 4 bytes 00:14:34.074 NVM Subsystem Reset: Not Supported 00:14:34.074 Command Sets Supported 00:14:34.074 NVM Command Set: Supported 00:14:34.074 Boot Partition: Not Supported 00:14:34.074 Memory Page Size Minimum: 4096 bytes 00:14:34.074 Memory Page Size Maximum: 4096 bytes 00:14:34.074 Persistent Memory Region: Not Supported 00:14:34.074 Optional Asynchronous Events Supported 00:14:34.074 Namespace Attribute Notices: Not Supported 00:14:34.074 Firmware Activation Notices: Not Supported 00:14:34.074 ANA Change Notices: Not Supported 00:14:34.074 PLE Aggregate Log Change Notices: Not Supported 00:14:34.074 LBA Status Info Alert Notices: Not Supported 00:14:34.074 EGE Aggregate Log Change Notices: Not Supported 00:14:34.074 Normal NVM Subsystem Shutdown event: Not Supported 00:14:34.074 Zone Descriptor Change Notices: Not Supported 00:14:34.074 Discovery Log Change Notices: Supported 00:14:34.074 Controller Attributes 00:14:34.074 128-bit Host Identifier: Not Supported 00:14:34.074 Non-Operational Permissive Mode: Not Supported 00:14:34.074 NVM Sets: Not Supported 00:14:34.074 Read Recovery Levels: Not Supported 00:14:34.074 Endurance Groups: Not Supported 00:14:34.074 Predictable Latency Mode: Not Supported 00:14:34.074 Traffic Based Keep ALive: Not Supported 00:14:34.074 Namespace Granularity: Not Supported 00:14:34.074 SQ Associations: Not Supported 00:14:34.074 UUID List: Not Supported 00:14:34.074 Multi-Domain Subsystem: Not Supported 00:14:34.074 Fixed Capacity Management: Not Supported 00:14:34.074 Variable Capacity Management: Not Supported 00:14:34.074 Delete Endurance Group: Not Supported 00:14:34.074 Delete NVM Set: Not Supported 00:14:34.074 Extended LBA Formats Supported: Not Supported 00:14:34.074 Flexible Data Placement Supported: Not Supported 00:14:34.074 00:14:34.074 Controller Memory Buffer Support 00:14:34.074 ================================ 00:14:34.074 Supported: No 00:14:34.074 00:14:34.074 Persistent Memory Region Support 00:14:34.074 ================================ 00:14:34.074 Supported: No 00:14:34.074 00:14:34.074 Admin Command Set Attributes 00:14:34.074 ============================ 00:14:34.074 Security Send/Receive: Not Supported 00:14:34.074 Format NVM: Not Supported 00:14:34.074 Firmware Activate/Download: Not Supported 00:14:34.074 Namespace Management: Not Supported 00:14:34.074 Device Self-Test: Not Supported 00:14:34.074 Directives: Not Supported 00:14:34.074 NVMe-MI: Not Supported 00:14:34.074 Virtualization Management: Not Supported 00:14:34.074 Doorbell Buffer Config: Not Supported 00:14:34.074 Get LBA Status Capability: Not Supported 00:14:34.074 Command & Feature Lockdown Capability: Not Supported 00:14:34.074 Abort Command Limit: 1 00:14:34.074 Async 
Event Request Limit: 4 00:14:34.074 Number of Firmware Slots: N/A 00:14:34.074 Firmware Slot 1 Read-Only: N/A 00:14:34.074 [2024-07-26 07:39:59.552012] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.074 [2024-07-26 07:39:59.552016] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.074 [2024-07-26 07:39:59.552032] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.074 [2024-07-26 07:39:59.552040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.074 [2024-07-26 07:39:59.552044] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.074 [2024-07-26 07:39:59.552048] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13baf40) on tqpair=0x13792c0 00:14:34.074 Firmware Activation Without Reset: N/A 00:14:34.074 Multiple Update Detection Support: N/A 00:14:34.074 Firmware Update Granularity: No Information Provided 00:14:34.074 Per-Namespace SMART Log: No 00:14:34.074 Asymmetric Namespace Access Log Page: Not Supported 00:14:34.074 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:34.074 Command Effects Log Page: Not Supported 00:14:34.074 Get Log Page Extended Data: Supported 00:14:34.074 Telemetry Log Pages: Not Supported 00:14:34.074 Persistent Event Log Pages: Not Supported 00:14:34.074 Supported Log Pages Log Page: May Support 00:14:34.074 Commands Supported & Effects Log Page: Not Supported 00:14:34.074 Feature Identifiers & Effects Log Page:May Support 00:14:34.074 NVMe-MI Commands & Effects Log Page: May Support 00:14:34.074 Data Area 4 for Telemetry Log: Not Supported 00:14:34.074 Error Log Page Entries Supported: 128 00:14:34.074 Keep Alive: Not Supported 00:14:34.074 00:14:34.074 NVM Command Set Attributes 00:14:34.074 ========================== 00:14:34.074 Submission Queue Entry Size 00:14:34.074 Max: 1 00:14:34.074 Min: 1 00:14:34.074 Completion Queue Entry Size 00:14:34.074 Max: 1 00:14:34.074 Min: 1 00:14:34.074 Number of Namespaces: 0 00:14:34.074 Compare Command: Not Supported 00:14:34.074 Write Uncorrectable Command: Not Supported 00:14:34.074 Dataset Management Command: Not Supported 00:14:34.074 Write Zeroes Command: Not Supported 00:14:34.074 Set Features Save Field: Not Supported 00:14:34.074 Reservations: Not Supported 00:14:34.074 Timestamp: Not Supported 00:14:34.074 Copy: Not Supported 00:14:34.074 Volatile Write Cache: Not Present 00:14:34.074 Atomic Write Unit (Normal): 1 00:14:34.074 Atomic Write Unit (PFail): 1 00:14:34.074 Atomic Compare & Write Unit: 1 00:14:34.074 Fused Compare & Write: Supported 00:14:34.074 Scatter-Gather List 00:14:34.074 SGL Command Set: Supported 00:14:34.074 SGL Keyed: Supported 00:14:34.074 SGL Bit Bucket Descriptor: Not Supported 00:14:34.074 SGL Metadata Pointer: Not Supported 00:14:34.074 Oversized SGL: Not Supported 00:14:34.074 SGL Metadata Address: Not Supported 00:14:34.075 SGL Offset: Supported 00:14:34.075 Transport SGL Data Block: Not Supported 00:14:34.075 Replay Protected Memory Block: Not Supported 00:14:34.075 00:14:34.075 Firmware Slot Information 00:14:34.075 ========================= 00:14:34.075 Active slot: 0 00:14:34.075 00:14:34.075 00:14:34.075 Error Log 00:14:34.075 ========= 00:14:34.075 00:14:34.075 Active Namespaces 00:14:34.075 ================= 00:14:34.075 Discovery Log Page 00:14:34.075 ================== 00:14:34.075 Generation Counter: 2 00:14:34.075 Number of Records: 2 00:14:34.075 Record Format: 0 00:14:34.075 00:14:34.075 Discovery Log Entry 0 00:14:34.075 
---------------------- 00:14:34.075 Transport Type: 3 (TCP) 00:14:34.075 Address Family: 1 (IPv4) 00:14:34.075 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:34.075 Entry Flags: 00:14:34.075 Duplicate Returned Information: 1 00:14:34.075 Explicit Persistent Connection Support for Discovery: 1 00:14:34.075 Transport Requirements: 00:14:34.075 Secure Channel: Not Required 00:14:34.075 Port ID: 0 (0x0000) 00:14:34.075 Controller ID: 65535 (0xffff) 00:14:34.075 Admin Max SQ Size: 128 00:14:34.075 Transport Service Identifier: 4420 00:14:34.075 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:34.075 Transport Address: 10.0.0.2 00:14:34.075 Discovery Log Entry 1 00:14:34.075 ---------------------- 00:14:34.075 Transport Type: 3 (TCP) 00:14:34.075 Address Family: 1 (IPv4) 00:14:34.075 Subsystem Type: 2 (NVM Subsystem) 00:14:34.075 Entry Flags: 00:14:34.075 Duplicate Returned Information: 0 00:14:34.075 Explicit Persistent Connection Support for Discovery: 0 00:14:34.075 Transport Requirements: 00:14:34.075 Secure Channel: Not Required 00:14:34.075 Port ID: 0 (0x0000) 00:14:34.075 Controller ID: 65535 (0xffff) 00:14:34.075 Admin Max SQ Size: 128 00:14:34.075 Transport Service Identifier: 4420 00:14:34.075 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:34.075 Transport Address: 10.0.0.2 [2024-07-26 07:39:59.552170] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:34.075 [2024-07-26 07:39:59.552188] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ba940) on tqpair=0x13792c0 00:14:34.075 [2024-07-26 07:39:59.552196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.075 [2024-07-26 07:39:59.552202] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13baac0) on tqpair=0x13792c0 00:14:34.075 [2024-07-26 07:39:59.552207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.075 [2024-07-26 07:39:59.552212] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13bac40) on tqpair=0x13792c0 00:14:34.075 [2024-07-26 07:39:59.552217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.075 [2024-07-26 07:39:59.552222] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13badc0) on tqpair=0x13792c0 00:14:34.075 [2024-07-26 07:39:59.552227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.075 [2024-07-26 07:39:59.552237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552242] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552246] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13792c0) 00:14:34.075 [2024-07-26 07:39:59.552254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.075 [2024-07-26 07:39:59.552279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13badc0, cid 3, qid 0 00:14:34.075 [2024-07-26 07:39:59.552334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.075 [2024-07-26 07:39:59.552342] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.075 [2024-07-26 07:39:59.552346] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13badc0) on tqpair=0x13792c0 00:14:34.075 [2024-07-26 07:39:59.552364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552369] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552372] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13792c0) 00:14:34.075 [2024-07-26 07:39:59.552380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.075 [2024-07-26 07:39:59.552403] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13badc0, cid 3, qid 0 00:14:34.075 [2024-07-26 07:39:59.552480] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.075 [2024-07-26 07:39:59.552489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.075 [2024-07-26 07:39:59.552493] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13badc0) on tqpair=0x13792c0 00:14:34.075 [2024-07-26 07:39:59.552503] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:34.075 [2024-07-26 07:39:59.552509] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:34.075 [2024-07-26 07:39:59.552520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552525] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13792c0) 00:14:34.075 [2024-07-26 07:39:59.552537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.075 [2024-07-26 07:39:59.552557] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13badc0, cid 3, qid 0 00:14:34.075 [2024-07-26 07:39:59.552607] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.075 [2024-07-26 07:39:59.552614] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.075 [2024-07-26 07:39:59.552618] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13badc0) on tqpair=0x13792c0 00:14:34.075 [2024-07-26 07:39:59.552634] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13792c0) 00:14:34.075 [2024-07-26 07:39:59.552650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.075 [2024-07-26 07:39:59.552668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13badc0, cid 3, qid 0 00:14:34.075 [2024-07-26 
07:39:59.552716] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.075 [2024-07-26 07:39:59.552723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.075 [2024-07-26 07:39:59.552727] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13badc0) on tqpair=0x13792c0 00:14:34.075 [2024-07-26 07:39:59.552742] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552751] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13792c0) 00:14:34.075 [2024-07-26 07:39:59.552758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.075 [2024-07-26 07:39:59.552775] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13badc0, cid 3, qid 0 00:14:34.075 [2024-07-26 07:39:59.552823] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.075 [2024-07-26 07:39:59.552830] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.075 [2024-07-26 07:39:59.552834] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13badc0) on tqpair=0x13792c0 00:14:34.075 [2024-07-26 07:39:59.552849] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552854] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552858] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13792c0) 00:14:34.075 [2024-07-26 07:39:59.552865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.075 [2024-07-26 07:39:59.552882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13badc0, cid 3, qid 0 00:14:34.075 [2024-07-26 07:39:59.552927] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.075 [2024-07-26 07:39:59.552934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.075 [2024-07-26 07:39:59.552938] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552942] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13badc0) on tqpair=0x13792c0 00:14:34.075 [2024-07-26 07:39:59.552953] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552958] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.552962] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13792c0) 00:14:34.075 [2024-07-26 07:39:59.552969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.075 [2024-07-26 07:39:59.552986] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13badc0, cid 3, qid 0 00:14:34.075 [2024-07-26 07:39:59.553030] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.075 [2024-07-26 07:39:59.553038] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.075 
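The Discovery Log Page dumped above (Generation Counter 2, two records: the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1, both TCP/IPv4 at 10.0.0.2:4420) is what the GET LOG PAGE (02) commands with cdw10 values 00ff0070 / 02ff0070 / 00010070 in this trace retrieve: log identifier 0x70, read in header-plus-entries chunks. A rough sketch of issuing the same admin command through SPDK's public API follows; it assumes an already connected ctrlr (for example from the earlier sketch), sizes the buffer for only a few entries, and is illustrative rather than the identify tool's actual code.

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_done;

static void
log_page_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;
	(void)cpl;
	g_done = true;
}

/* Fetch and print the first few entries of the discovery log page (0x70). */
static void
dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Header plus room for a handful of entries; a real reader sizes this from numrec. */
	size_t buf_size = sizeof(struct spdk_nvmf_discovery_log_page) +
			  4 * sizeof(struct spdk_nvmf_discovery_log_page_entry);
	struct spdk_nvmf_discovery_log_page *log = calloc(1, buf_size);
	uint32_t i;

	if (log == NULL) {
		return;
	}
	g_done = false;
	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					     SPDK_NVME_GLOBAL_NS_TAG, log,
					     (uint32_t)buf_size, 0,
					     log_page_done, NULL) != 0) {
		free(log);
		return;
	}
	/* Poll the admin queue until the completion callback fires. */
	while (!g_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}

	printf("genctr %" PRIu64 " numrec %" PRIu64 "\n", log->genctr, log->numrec);
	for (i = 0; i < log->numrec && i < 4; i++) {
		printf("entry %u: subnqn %.256s traddr %.256s trsvcid %.32s\n", i,
		       log->entries[i].subnqn, log->entries[i].traddr,
		       log->entries[i].trsvcid);
	}
	free(log);
}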
[2024-07-26 07:39:59.553042] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.075 [2024-07-26 07:39:59.553046] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13badc0) on tqpair=0x13792c0 00:14:34.076 [2024-07-26 07:39:59.553057] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.076 [2024-07-26 07:39:59.553062] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.076 [2024-07-26 07:39:59.553066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13792c0) 00:14:34.076 [2024-07-26 07:39:59.553073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.076 [2024-07-26 07:39:59.553090] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13badc0, cid 3, qid 0 00:14:34.076 [2024-07-26 07:39:59.553140] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.076 [2024-07-26 07:39:59.553148] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.076 [2024-07-26 07:39:59.553152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.076 [2024-07-26 07:39:59.553156] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13badc0) on tqpair=0x13792c0 00:14:34.076 [2024-07-26 07:39:59.553180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.076 [2024-07-26 07:39:59.553185] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.076 [2024-07-26 07:39:59.553189] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13792c0) 00:14:34.076 [2024-07-26 07:39:59.553197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.076 [2024-07-26 07:39:59.553215] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13badc0, cid 3, qid 0 00:14:34.076 [2024-07-26 07:39:59.553265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.076 [2024-07-26 07:39:59.553272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.076 [2024-07-26 07:39:59.553276] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.076 [2024-07-26 07:39:59.553281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13badc0) on tqpair=0x13792c0 00:14:34.076 [2024-07-26 07:39:59.553291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.076 [2024-07-26 07:39:59.553296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.076 [2024-07-26 07:39:59.553300] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13792c0) 00:14:34.076 [2024-07-26 07:39:59.553308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.076 [2024-07-26 07:39:59.553325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13badc0, cid 3, qid 0 00:14:34.076 [2024-07-26 07:39:59.553373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.076 [2024-07-26 07:39:59.553381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.076 [2024-07-26 07:39:59.553385] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.076 [2024-07-26 07:39:59.553389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x13badc0) on tqpair=0x13792c0 00:14:34.076 [2024-07-26 07:39:59.553400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.076 [2024-07-26 07:39:59.553406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.076 [2024-07-26 07:39:59.553409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13792c0) 00:14:34.076 [2024-07-26 07:39:59.553417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.076 [2024-07-26 07:39:59.553434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13badc0, cid 3, qid 0 00:14:34.076 [2024-07-26 07:39:59.557500] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.076 [2024-07-26 07:39:59.557520] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.076 [2024-07-26 07:39:59.557526] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.076 [2024-07-26 07:39:59.557531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13badc0) on tqpair=0x13792c0 00:14:34.076 [2024-07-26 07:39:59.557546] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.076 [2024-07-26 07:39:59.557552] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.076 [2024-07-26 07:39:59.557556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13792c0) 00:14:34.076 [2024-07-26 07:39:59.557565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.076 [2024-07-26 07:39:59.557591] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13badc0, cid 3, qid 0 00:14:34.076 [2024-07-26 07:39:59.557638] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.076 [2024-07-26 07:39:59.557646] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.076 [2024-07-26 07:39:59.557650] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.076 [2024-07-26 07:39:59.557654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13badc0) on tqpair=0x13792c0 00:14:34.076 [2024-07-26 07:39:59.557663] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:14:34.076 00:14:34.076 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:34.076 [2024-07-26 07:39:59.596660] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:14:34.076 [2024-07-26 07:39:59.596709] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74236 ] 00:14:34.337 [2024-07-26 07:39:59.736875] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:34.337 [2024-07-26 07:39:59.736952] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:34.337 [2024-07-26 07:39:59.736959] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:34.337 [2024-07-26 07:39:59.736970] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:34.337 [2024-07-26 07:39:59.736978] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:34.337 [2024-07-26 07:39:59.737078] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:34.337 [2024-07-26 07:39:59.737135] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c8b2c0 0 00:14:34.337 [2024-07-26 07:39:59.746520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:34.337 [2024-07-26 07:39:59.746542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:34.337 [2024-07-26 07:39:59.746565] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:34.337 [2024-07-26 07:39:59.746569] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:34.337 [2024-07-26 07:39:59.746608] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.337 [2024-07-26 07:39:59.746616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.337 [2024-07-26 07:39:59.746620] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8b2c0) 00:14:34.337 [2024-07-26 07:39:59.746632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:34.337 [2024-07-26 07:39:59.746661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccc940, cid 0, qid 0 00:14:34.337 [2024-07-26 07:39:59.754533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.337 [2024-07-26 07:39:59.754552] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.337 [2024-07-26 07:39:59.754558] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.337 [2024-07-26 07:39:59.754563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccc940) on tqpair=0x1c8b2c0 00:14:34.337 [2024-07-26 07:39:59.754573] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:34.337 [2024-07-26 07:39:59.754582] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:34.337 [2024-07-26 07:39:59.754589] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:34.337 [2024-07-26 07:39:59.754606] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.337 [2024-07-26 07:39:59.754611] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.337 [2024-07-26 07:39:59.754616] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8b2c0) 00:14:34.337 [2024-07-26 07:39:59.754626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.337 [2024-07-26 07:39:59.754661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccc940, cid 0, qid 0 00:14:34.337 [2024-07-26 07:39:59.754719] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.337 [2024-07-26 07:39:59.754726] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.337 [2024-07-26 07:39:59.754731] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.337 [2024-07-26 07:39:59.754735] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccc940) on tqpair=0x1c8b2c0 00:14:34.337 [2024-07-26 07:39:59.754741] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:34.337 [2024-07-26 07:39:59.754750] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:34.337 [2024-07-26 07:39:59.754758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.337 [2024-07-26 07:39:59.754763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.337 [2024-07-26 07:39:59.754767] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8b2c0) 00:14:34.337 [2024-07-26 07:39:59.754775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.337 [2024-07-26 07:39:59.754795] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccc940, cid 0, qid 0 00:14:34.337 [2024-07-26 07:39:59.755161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.337 [2024-07-26 07:39:59.755177] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.337 [2024-07-26 07:39:59.755182] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.337 [2024-07-26 07:39:59.755187] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccc940) on tqpair=0x1c8b2c0 00:14:34.337 [2024-07-26 07:39:59.755193] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:34.337 [2024-07-26 07:39:59.755203] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:34.337 [2024-07-26 07:39:59.755212] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.337 [2024-07-26 07:39:59.755217] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.337 [2024-07-26 07:39:59.755221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8b2c0) 00:14:34.337 [2024-07-26 07:39:59.755229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.337 [2024-07-26 07:39:59.755249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccc940, cid 0, qid 0 00:14:34.337 [2024-07-26 07:39:59.755365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.337 [2024-07-26 07:39:59.755372] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.337 [2024-07-26 07:39:59.755376] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.337 [2024-07-26 07:39:59.755381] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccc940) on tqpair=0x1c8b2c0 00:14:34.337 [2024-07-26 07:39:59.755387] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:34.337 [2024-07-26 07:39:59.755398] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.337 [2024-07-26 07:39:59.755404] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.337 [2024-07-26 07:39:59.755408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8b2c0) 00:14:34.337 [2024-07-26 07:39:59.755416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.337 [2024-07-26 07:39:59.755434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccc940, cid 0, qid 0 00:14:34.337 [2024-07-26 07:39:59.755545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.337 [2024-07-26 07:39:59.755554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.337 [2024-07-26 07:39:59.755559] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.337 [2024-07-26 07:39:59.755563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccc940) on tqpair=0x1c8b2c0 00:14:34.337 [2024-07-26 07:39:59.755569] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:34.337 [2024-07-26 07:39:59.755575] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:34.337 [2024-07-26 07:39:59.755584] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:34.337 [2024-07-26 07:39:59.755691] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:34.338 [2024-07-26 07:39:59.755696] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:34.338 [2024-07-26 07:39:59.755705] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.755710] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.755715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8b2c0) 00:14:34.338 [2024-07-26 07:39:59.755723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.338 [2024-07-26 07:39:59.755745] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccc940, cid 0, qid 0 00:14:34.338 [2024-07-26 07:39:59.755848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.338 [2024-07-26 07:39:59.755864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.338 [2024-07-26 07:39:59.755868] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.755873] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccc940) on tqpair=0x1c8b2c0 00:14:34.338 [2024-07-26 07:39:59.755879] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:34.338 [2024-07-26 07:39:59.755891] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.755896] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.755900] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8b2c0) 00:14:34.338 [2024-07-26 07:39:59.755908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.338 [2024-07-26 07:39:59.755927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccc940, cid 0, qid 0 00:14:34.338 [2024-07-26 07:39:59.756242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.338 [2024-07-26 07:39:59.756258] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.338 [2024-07-26 07:39:59.756263] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.756267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccc940) on tqpair=0x1c8b2c0 00:14:34.338 [2024-07-26 07:39:59.756273] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:34.338 [2024-07-26 07:39:59.756278] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:34.338 [2024-07-26 07:39:59.756288] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:34.338 [2024-07-26 07:39:59.756299] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:34.338 [2024-07-26 07:39:59.756310] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.756315] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8b2c0) 00:14:34.338 [2024-07-26 07:39:59.756323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.338 [2024-07-26 07:39:59.756344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccc940, cid 0, qid 0 00:14:34.338 [2024-07-26 07:39:59.756652] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.338 [2024-07-26 07:39:59.756672] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.338 [2024-07-26 07:39:59.756677] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.756682] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8b2c0): datao=0, datal=4096, cccid=0 00:14:34.338 [2024-07-26 07:39:59.756687] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ccc940) on tqpair(0x1c8b2c0): expected_datao=0, payload_size=4096 00:14:34.338 [2024-07-26 07:39:59.756693] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.756702] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.756707] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.338 [2024-07-26 
07:39:59.756716] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.338 [2024-07-26 07:39:59.756723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.338 [2024-07-26 07:39:59.756727] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.756731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccc940) on tqpair=0x1c8b2c0 00:14:34.338 [2024-07-26 07:39:59.756741] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:34.338 [2024-07-26 07:39:59.756747] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:34.338 [2024-07-26 07:39:59.756753] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:34.338 [2024-07-26 07:39:59.756764] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:34.338 [2024-07-26 07:39:59.756770] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:34.338 [2024-07-26 07:39:59.756775] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:34.338 [2024-07-26 07:39:59.756786] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:34.338 [2024-07-26 07:39:59.756795] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.756800] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.756804] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8b2c0) 00:14:34.338 [2024-07-26 07:39:59.756813] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:34.338 [2024-07-26 07:39:59.756839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccc940, cid 0, qid 0 00:14:34.338 [2024-07-26 07:39:59.757113] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.338 [2024-07-26 07:39:59.757128] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.338 [2024-07-26 07:39:59.757133] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.757138] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccc940) on tqpair=0x1c8b2c0 00:14:34.338 [2024-07-26 07:39:59.757146] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.757151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.757155] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8b2c0) 00:14:34.338 [2024-07-26 07:39:59.757172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.338 [2024-07-26 07:39:59.757179] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.757184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.757188] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c8b2c0) 00:14:34.338 
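A few records further down, the cnode1 controller finishes AER and keep-alive setup and moves on to "identify active ns" / "identify ns" ("Namespace 1 was added"). Once spdk_nvme_connect() has returned, that namespace list can be walked with the public API roughly as sketched below; ctrlr is assumed to be a controller connected to nqn.2016-06.io.spdk:cnode1, and the snippet is illustrative only, not part of the test.

#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Walk the active namespaces reported by the IDENTIFY (cns=2) step in the trace. */
static void
list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
			continue;
		}
		printf("nsid %u: %u-byte sectors, %" PRIu64 " bytes total\n",
		       nsid,
		       spdk_nvme_ns_get_sector_size(ns),
		       spdk_nvme_ns_get_size(ns));
	}
}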
[2024-07-26 07:39:59.757194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.338 [2024-07-26 07:39:59.757201] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.757206] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.757210] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c8b2c0) 00:14:34.338 [2024-07-26 07:39:59.757216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.338 [2024-07-26 07:39:59.757223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.757227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.757231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8b2c0) 00:14:34.338 [2024-07-26 07:39:59.757238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.338 [2024-07-26 07:39:59.757244] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:34.338 [2024-07-26 07:39:59.757254] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:34.338 [2024-07-26 07:39:59.757262] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.757266] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8b2c0) 00:14:34.338 [2024-07-26 07:39:59.757274] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.338 [2024-07-26 07:39:59.757301] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccc940, cid 0, qid 0 00:14:34.338 [2024-07-26 07:39:59.757309] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cccac0, cid 1, qid 0 00:14:34.338 [2024-07-26 07:39:59.757315] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cccc40, cid 2, qid 0 00:14:34.338 [2024-07-26 07:39:59.757321] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cccdc0, cid 3, qid 0 00:14:34.338 [2024-07-26 07:39:59.757326] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cccf40, cid 4, qid 0 00:14:34.338 [2024-07-26 07:39:59.757439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.338 [2024-07-26 07:39:59.757446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.338 [2024-07-26 07:39:59.757450] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.757455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cccf40) on tqpair=0x1c8b2c0 00:14:34.338 [2024-07-26 07:39:59.757461] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:34.338 [2024-07-26 07:39:59.757482] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:34.338 [2024-07-26 07:39:59.757493] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:34.338 [2024-07-26 07:39:59.757501] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:34.338 [2024-07-26 07:39:59.757509] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.757514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.338 [2024-07-26 07:39:59.757518] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8b2c0) 00:14:34.338 [2024-07-26 07:39:59.757527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:34.338 [2024-07-26 07:39:59.757548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cccf40, cid 4, qid 0 00:14:34.338 [2024-07-26 07:39:59.757907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.338 [2024-07-26 07:39:59.757922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.339 [2024-07-26 07:39:59.757928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.757932] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cccf40) on tqpair=0x1c8b2c0 00:14:34.339 [2024-07-26 07:39:59.758001] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:34.339 [2024-07-26 07:39:59.758015] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:34.339 [2024-07-26 07:39:59.758025] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.758029] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8b2c0) 00:14:34.339 [2024-07-26 07:39:59.758038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.339 [2024-07-26 07:39:59.758059] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cccf40, cid 4, qid 0 00:14:34.339 [2024-07-26 07:39:59.758338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.339 [2024-07-26 07:39:59.758353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.339 [2024-07-26 07:39:59.758358] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.758362] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8b2c0): datao=0, datal=4096, cccid=4 00:14:34.339 [2024-07-26 07:39:59.758368] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cccf40) on tqpair(0x1c8b2c0): expected_datao=0, payload_size=4096 00:14:34.339 [2024-07-26 07:39:59.758373] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.758381] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.758385] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.758436] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.339 [2024-07-26 07:39:59.758442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:14:34.339 [2024-07-26 07:39:59.758446] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.758451] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cccf40) on tqpair=0x1c8b2c0 00:14:34.339 [2024-07-26 07:39:59.758464] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:34.339 [2024-07-26 07:39:59.762559] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:34.339 [2024-07-26 07:39:59.762577] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:34.339 [2024-07-26 07:39:59.762589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.762595] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8b2c0) 00:14:34.339 [2024-07-26 07:39:59.762604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.339 [2024-07-26 07:39:59.762631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cccf40, cid 4, qid 0 00:14:34.339 [2024-07-26 07:39:59.762713] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.339 [2024-07-26 07:39:59.762720] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.339 [2024-07-26 07:39:59.762724] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.762729] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8b2c0): datao=0, datal=4096, cccid=4 00:14:34.339 [2024-07-26 07:39:59.762734] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cccf40) on tqpair(0x1c8b2c0): expected_datao=0, payload_size=4096 00:14:34.339 [2024-07-26 07:39:59.762739] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.762747] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.762751] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.762760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.339 [2024-07-26 07:39:59.762767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.339 [2024-07-26 07:39:59.762771] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.762775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cccf40) on tqpair=0x1c8b2c0 00:14:34.339 [2024-07-26 07:39:59.762793] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:34.339 [2024-07-26 07:39:59.762806] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:34.339 [2024-07-26 07:39:59.762816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.762820] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8b2c0) 00:14:34.339 [2024-07-26 07:39:59.762828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.339 [2024-07-26 07:39:59.762849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cccf40, cid 4, qid 0 00:14:34.339 [2024-07-26 07:39:59.763166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.339 [2024-07-26 07:39:59.763182] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.339 [2024-07-26 07:39:59.763187] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.763191] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8b2c0): datao=0, datal=4096, cccid=4 00:14:34.339 [2024-07-26 07:39:59.763197] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cccf40) on tqpair(0x1c8b2c0): expected_datao=0, payload_size=4096 00:14:34.339 [2024-07-26 07:39:59.763202] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.763210] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.763215] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.763224] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.339 [2024-07-26 07:39:59.763231] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.339 [2024-07-26 07:39:59.763235] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.763239] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cccf40) on tqpair=0x1c8b2c0 00:14:34.339 [2024-07-26 07:39:59.763249] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:34.339 [2024-07-26 07:39:59.763259] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:34.339 [2024-07-26 07:39:59.763271] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:34.339 [2024-07-26 07:39:59.763279] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:34.339 [2024-07-26 07:39:59.763286] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:34.339 [2024-07-26 07:39:59.763292] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:34.339 [2024-07-26 07:39:59.763298] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:34.339 [2024-07-26 07:39:59.763303] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:34.339 [2024-07-26 07:39:59.763310] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:34.339 [2024-07-26 07:39:59.763326] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.763331] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8b2c0) 00:14:34.339 [2024-07-26 07:39:59.763340] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.339 [2024-07-26 07:39:59.763347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.763352] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.763357] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c8b2c0) 00:14:34.339 [2024-07-26 07:39:59.763363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.339 [2024-07-26 07:39:59.763390] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cccf40, cid 4, qid 0 00:14:34.339 [2024-07-26 07:39:59.763398] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccd0c0, cid 5, qid 0 00:14:34.339 [2024-07-26 07:39:59.763674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.339 [2024-07-26 07:39:59.763691] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.339 [2024-07-26 07:39:59.763696] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.763700] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cccf40) on tqpair=0x1c8b2c0 00:14:34.339 [2024-07-26 07:39:59.763708] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.339 [2024-07-26 07:39:59.763715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.339 [2024-07-26 07:39:59.763719] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.763723] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccd0c0) on tqpair=0x1c8b2c0 00:14:34.339 [2024-07-26 07:39:59.763735] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.763740] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c8b2c0) 00:14:34.339 [2024-07-26 07:39:59.763748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.339 [2024-07-26 07:39:59.763770] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccd0c0, cid 5, qid 0 00:14:34.339 [2024-07-26 07:39:59.764011] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.339 [2024-07-26 07:39:59.764025] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.339 [2024-07-26 07:39:59.764030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.764035] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccd0c0) on tqpair=0x1c8b2c0 00:14:34.339 [2024-07-26 07:39:59.764047] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.339 [2024-07-26 07:39:59.764052] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c8b2c0) 00:14:34.339 [2024-07-26 07:39:59.764059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.339 [2024-07-26 07:39:59.764079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccd0c0, cid 5, qid 0 00:14:34.339 [2024-07-26 07:39:59.764198] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.339 
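At this point the controller has reached the ready state and the remaining admin traffic is the identify example reading back features and log pages. For comparison, a sketch of issuing the same queries from the kernel NVMe/TCP initiator instead of the SPDK userspace host (the /dev/nvme0 device name and the exact nvme-cli flags are assumptions; only the address, port and subsystem NQN come from this log):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme get-feature /dev/nvme0 --feature-id=0x01   # Arbitration, as in the trace above
  nvme get-feature /dev/nvme0 --feature-id=0x02   # Power Management
  nvme error-log /dev/nvme0                       # Error Information log page (01h)
  nvme smart-log /dev/nvme0                       # SMART / Health Information log page (02h)
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1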
[2024-07-26 07:39:59.764206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.339 [2024-07-26 07:39:59.764210] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.764215] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccd0c0) on tqpair=0x1c8b2c0 00:14:34.340 [2024-07-26 07:39:59.764226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.764231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c8b2c0) 00:14:34.340 [2024-07-26 07:39:59.764238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.340 [2024-07-26 07:39:59.764255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccd0c0, cid 5, qid 0 00:14:34.340 [2024-07-26 07:39:59.764488] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.340 [2024-07-26 07:39:59.764504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.340 [2024-07-26 07:39:59.764509] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.764514] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccd0c0) on tqpair=0x1c8b2c0 00:14:34.340 [2024-07-26 07:39:59.764534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.764541] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c8b2c0) 00:14:34.340 [2024-07-26 07:39:59.764549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.340 [2024-07-26 07:39:59.764558] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.764563] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8b2c0) 00:14:34.340 [2024-07-26 07:39:59.764569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.340 [2024-07-26 07:39:59.764577] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.764582] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c8b2c0) 00:14:34.340 [2024-07-26 07:39:59.764589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.340 [2024-07-26 07:39:59.764597] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.764601] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c8b2c0) 00:14:34.340 [2024-07-26 07:39:59.764608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.340 [2024-07-26 07:39:59.764632] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccd0c0, cid 5, qid 0 00:14:34.340 [2024-07-26 07:39:59.764640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cccf40, cid 4, qid 0 00:14:34.340 [2024-07-26 07:39:59.764646] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccd240, cid 6, qid 0 00:14:34.340 [2024-07-26 07:39:59.764651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccd3c0, cid 7, qid 0 00:14:34.340 [2024-07-26 07:39:59.765070] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.340 [2024-07-26 07:39:59.765086] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.340 [2024-07-26 07:39:59.765091] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765095] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8b2c0): datao=0, datal=8192, cccid=5 00:14:34.340 [2024-07-26 07:39:59.765100] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ccd0c0) on tqpair(0x1c8b2c0): expected_datao=0, payload_size=8192 00:14:34.340 [2024-07-26 07:39:59.765105] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765123] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765129] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.340 [2024-07-26 07:39:59.765142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.340 [2024-07-26 07:39:59.765146] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765150] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8b2c0): datao=0, datal=512, cccid=4 00:14:34.340 [2024-07-26 07:39:59.765155] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cccf40) on tqpair(0x1c8b2c0): expected_datao=0, payload_size=512 00:14:34.340 [2024-07-26 07:39:59.765170] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765178] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765182] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765188] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.340 [2024-07-26 07:39:59.765194] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.340 [2024-07-26 07:39:59.765198] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765202] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8b2c0): datao=0, datal=512, cccid=6 00:14:34.340 [2024-07-26 07:39:59.765207] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ccd240) on tqpair(0x1c8b2c0): expected_datao=0, payload_size=512 00:14:34.340 [2024-07-26 07:39:59.765212] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765218] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765222] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765228] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.340 [2024-07-26 07:39:59.765234] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.340 [2024-07-26 07:39:59.765239] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765243] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1c8b2c0): datao=0, datal=4096, cccid=7 00:14:34.340 [2024-07-26 07:39:59.765248] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ccd3c0) on tqpair(0x1c8b2c0): expected_datao=0, payload_size=4096 00:14:34.340 [2024-07-26 07:39:59.765252] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765259] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765264] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765270] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.340 [2024-07-26 07:39:59.765276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.340 [2024-07-26 07:39:59.765280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccd0c0) on tqpair=0x1c8b2c0 00:14:34.340 [2024-07-26 07:39:59.765302] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.340 [2024-07-26 07:39:59.765309] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.340 [2024-07-26 07:39:59.765313] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cccf40) on tqpair=0x1c8b2c0 00:14:34.340 [2024-07-26 07:39:59.765330] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.340 [2024-07-26 07:39:59.765337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.340 [2024-07-26 07:39:59.765340] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765345] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccd240) on tqpair=0x1c8b2c0 00:14:34.340 [2024-07-26 07:39:59.765353] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.340 [2024-07-26 07:39:59.765359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.340 [2024-07-26 07:39:59.765363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.340 [2024-07-26 07:39:59.765367] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccd3c0) on tqpair=0x1c8b2c0 00:14:34.340 ===================================================== 00:14:34.340 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:34.340 ===================================================== 00:14:34.340 Controller Capabilities/Features 00:14:34.340 ================================ 00:14:34.340 Vendor ID: 8086 00:14:34.340 Subsystem Vendor ID: 8086 00:14:34.340 Serial Number: SPDK00000000000001 00:14:34.340 Model Number: SPDK bdev Controller 00:14:34.340 Firmware Version: 24.09 00:14:34.340 Recommended Arb Burst: 6 00:14:34.340 IEEE OUI Identifier: e4 d2 5c 00:14:34.340 Multi-path I/O 00:14:34.340 May have multiple subsystem ports: Yes 00:14:34.340 May have multiple controllers: Yes 00:14:34.340 Associated with SR-IOV VF: No 00:14:34.340 Max Data Transfer Size: 131072 00:14:34.340 Max Number of Namespaces: 32 00:14:34.340 Max Number of I/O Queues: 127 00:14:34.340 NVMe Specification Version (VS): 1.3 00:14:34.340 NVMe Specification Version (Identify): 1.3 00:14:34.340 Maximum Queue Entries: 128 00:14:34.340 Contiguous Queues Required: Yes 00:14:34.340 Arbitration Mechanisms Supported 00:14:34.340 Weighted Round Robin: Not Supported 
00:14:34.340 Vendor Specific: Not Supported 00:14:34.340 Reset Timeout: 15000 ms 00:14:34.340 Doorbell Stride: 4 bytes 00:14:34.340 NVM Subsystem Reset: Not Supported 00:14:34.340 Command Sets Supported 00:14:34.340 NVM Command Set: Supported 00:14:34.340 Boot Partition: Not Supported 00:14:34.340 Memory Page Size Minimum: 4096 bytes 00:14:34.340 Memory Page Size Maximum: 4096 bytes 00:14:34.340 Persistent Memory Region: Not Supported 00:14:34.340 Optional Asynchronous Events Supported 00:14:34.340 Namespace Attribute Notices: Supported 00:14:34.340 Firmware Activation Notices: Not Supported 00:14:34.340 ANA Change Notices: Not Supported 00:14:34.340 PLE Aggregate Log Change Notices: Not Supported 00:14:34.340 LBA Status Info Alert Notices: Not Supported 00:14:34.340 EGE Aggregate Log Change Notices: Not Supported 00:14:34.340 Normal NVM Subsystem Shutdown event: Not Supported 00:14:34.340 Zone Descriptor Change Notices: Not Supported 00:14:34.340 Discovery Log Change Notices: Not Supported 00:14:34.340 Controller Attributes 00:14:34.340 128-bit Host Identifier: Supported 00:14:34.340 Non-Operational Permissive Mode: Not Supported 00:14:34.340 NVM Sets: Not Supported 00:14:34.340 Read Recovery Levels: Not Supported 00:14:34.340 Endurance Groups: Not Supported 00:14:34.340 Predictable Latency Mode: Not Supported 00:14:34.340 Traffic Based Keep ALive: Not Supported 00:14:34.340 Namespace Granularity: Not Supported 00:14:34.340 SQ Associations: Not Supported 00:14:34.340 UUID List: Not Supported 00:14:34.341 Multi-Domain Subsystem: Not Supported 00:14:34.341 Fixed Capacity Management: Not Supported 00:14:34.341 Variable Capacity Management: Not Supported 00:14:34.341 Delete Endurance Group: Not Supported 00:14:34.341 Delete NVM Set: Not Supported 00:14:34.341 Extended LBA Formats Supported: Not Supported 00:14:34.341 Flexible Data Placement Supported: Not Supported 00:14:34.341 00:14:34.341 Controller Memory Buffer Support 00:14:34.341 ================================ 00:14:34.341 Supported: No 00:14:34.341 00:14:34.341 Persistent Memory Region Support 00:14:34.341 ================================ 00:14:34.341 Supported: No 00:14:34.341 00:14:34.341 Admin Command Set Attributes 00:14:34.341 ============================ 00:14:34.341 Security Send/Receive: Not Supported 00:14:34.341 Format NVM: Not Supported 00:14:34.341 Firmware Activate/Download: Not Supported 00:14:34.341 Namespace Management: Not Supported 00:14:34.341 Device Self-Test: Not Supported 00:14:34.341 Directives: Not Supported 00:14:34.341 NVMe-MI: Not Supported 00:14:34.341 Virtualization Management: Not Supported 00:14:34.341 Doorbell Buffer Config: Not Supported 00:14:34.341 Get LBA Status Capability: Not Supported 00:14:34.341 Command & Feature Lockdown Capability: Not Supported 00:14:34.341 Abort Command Limit: 4 00:14:34.341 Async Event Request Limit: 4 00:14:34.341 Number of Firmware Slots: N/A 00:14:34.341 Firmware Slot 1 Read-Only: N/A 00:14:34.341 Firmware Activation Without Reset: N/A 00:14:34.341 Multiple Update Detection Support: N/A 00:14:34.341 Firmware Update Granularity: No Information Provided 00:14:34.341 Per-Namespace SMART Log: No 00:14:34.341 Asymmetric Namespace Access Log Page: Not Supported 00:14:34.341 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:34.341 Command Effects Log Page: Supported 00:14:34.341 Get Log Page Extended Data: Supported 00:14:34.341 Telemetry Log Pages: Not Supported 00:14:34.341 Persistent Event Log Pages: Not Supported 00:14:34.341 Supported Log Pages Log Page: May Support 
00:14:34.341 Commands Supported & Effects Log Page: Not Supported 00:14:34.341 Feature Identifiers & Effects Log Page:May Support 00:14:34.341 NVMe-MI Commands & Effects Log Page: May Support 00:14:34.341 Data Area 4 for Telemetry Log: Not Supported 00:14:34.341 Error Log Page Entries Supported: 128 00:14:34.341 Keep Alive: Supported 00:14:34.341 Keep Alive Granularity: 10000 ms 00:14:34.341 00:14:34.341 NVM Command Set Attributes 00:14:34.341 ========================== 00:14:34.341 Submission Queue Entry Size 00:14:34.341 Max: 64 00:14:34.341 Min: 64 00:14:34.341 Completion Queue Entry Size 00:14:34.341 Max: 16 00:14:34.341 Min: 16 00:14:34.341 Number of Namespaces: 32 00:14:34.341 Compare Command: Supported 00:14:34.341 Write Uncorrectable Command: Not Supported 00:14:34.341 Dataset Management Command: Supported 00:14:34.341 Write Zeroes Command: Supported 00:14:34.341 Set Features Save Field: Not Supported 00:14:34.341 Reservations: Supported 00:14:34.341 Timestamp: Not Supported 00:14:34.341 Copy: Supported 00:14:34.341 Volatile Write Cache: Present 00:14:34.341 Atomic Write Unit (Normal): 1 00:14:34.341 Atomic Write Unit (PFail): 1 00:14:34.341 Atomic Compare & Write Unit: 1 00:14:34.341 Fused Compare & Write: Supported 00:14:34.341 Scatter-Gather List 00:14:34.341 SGL Command Set: Supported 00:14:34.341 SGL Keyed: Supported 00:14:34.341 SGL Bit Bucket Descriptor: Not Supported 00:14:34.341 SGL Metadata Pointer: Not Supported 00:14:34.341 Oversized SGL: Not Supported 00:14:34.341 SGL Metadata Address: Not Supported 00:14:34.341 SGL Offset: Supported 00:14:34.341 Transport SGL Data Block: Not Supported 00:14:34.341 Replay Protected Memory Block: Not Supported 00:14:34.341 00:14:34.341 Firmware Slot Information 00:14:34.341 ========================= 00:14:34.341 Active slot: 1 00:14:34.341 Slot 1 Firmware Revision: 24.09 00:14:34.341 00:14:34.341 00:14:34.341 Commands Supported and Effects 00:14:34.341 ============================== 00:14:34.341 Admin Commands 00:14:34.341 -------------- 00:14:34.341 Get Log Page (02h): Supported 00:14:34.341 Identify (06h): Supported 00:14:34.341 Abort (08h): Supported 00:14:34.341 Set Features (09h): Supported 00:14:34.341 Get Features (0Ah): Supported 00:14:34.341 Asynchronous Event Request (0Ch): Supported 00:14:34.341 Keep Alive (18h): Supported 00:14:34.341 I/O Commands 00:14:34.341 ------------ 00:14:34.341 Flush (00h): Supported LBA-Change 00:14:34.341 Write (01h): Supported LBA-Change 00:14:34.341 Read (02h): Supported 00:14:34.341 Compare (05h): Supported 00:14:34.341 Write Zeroes (08h): Supported LBA-Change 00:14:34.341 Dataset Management (09h): Supported LBA-Change 00:14:34.341 Copy (19h): Supported LBA-Change 00:14:34.341 00:14:34.341 Error Log 00:14:34.341 ========= 00:14:34.341 00:14:34.341 Arbitration 00:14:34.341 =========== 00:14:34.341 Arbitration Burst: 1 00:14:34.341 00:14:34.341 Power Management 00:14:34.341 ================ 00:14:34.341 Number of Power States: 1 00:14:34.341 Current Power State: Power State #0 00:14:34.341 Power State #0: 00:14:34.341 Max Power: 0.00 W 00:14:34.341 Non-Operational State: Operational 00:14:34.341 Entry Latency: Not Reported 00:14:34.341 Exit Latency: Not Reported 00:14:34.341 Relative Read Throughput: 0 00:14:34.341 Relative Read Latency: 0 00:14:34.341 Relative Write Throughput: 0 00:14:34.341 Relative Write Latency: 0 00:14:34.341 Idle Power: Not Reported 00:14:34.341 Active Power: Not Reported 00:14:34.341 Non-Operational Permissive Mode: Not Supported 00:14:34.341 00:14:34.341 Health 
Information 00:14:34.341 ================== 00:14:34.341 Critical Warnings: 00:14:34.341 Available Spare Space: OK 00:14:34.341 Temperature: OK 00:14:34.341 Device Reliability: OK 00:14:34.341 Read Only: No 00:14:34.341 Volatile Memory Backup: OK 00:14:34.341 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:34.341 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:34.341 Available Spare: 0% 00:14:34.341 Available Spare Threshold: 0% 00:14:34.341 Life Percentage Used:[2024-07-26 07:39:59.765485] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.341 [2024-07-26 07:39:59.765494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c8b2c0) 00:14:34.341 [2024-07-26 07:39:59.765503] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.341 [2024-07-26 07:39:59.765529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccd3c0, cid 7, qid 0 00:14:34.341 [2024-07-26 07:39:59.765681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.341 [2024-07-26 07:39:59.765689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.341 [2024-07-26 07:39:59.765693] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.341 [2024-07-26 07:39:59.765697] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccd3c0) on tqpair=0x1c8b2c0 00:14:34.341 [2024-07-26 07:39:59.765756] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:34.341 [2024-07-26 07:39:59.765773] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccc940) on tqpair=0x1c8b2c0 00:14:34.341 [2024-07-26 07:39:59.765780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.341 [2024-07-26 07:39:59.765787] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cccac0) on tqpair=0x1c8b2c0 00:14:34.341 [2024-07-26 07:39:59.765792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.341 [2024-07-26 07:39:59.765798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cccc40) on tqpair=0x1c8b2c0 00:14:34.341 [2024-07-26 07:39:59.765803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.341 [2024-07-26 07:39:59.765808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cccdc0) on tqpair=0x1c8b2c0 00:14:34.341 [2024-07-26 07:39:59.765814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.342 [2024-07-26 07:39:59.765824] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.342 [2024-07-26 07:39:59.765830] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.342 [2024-07-26 07:39:59.765834] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8b2c0) 00:14:34.342 [2024-07-26 07:39:59.765842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.342 [2024-07-26 07:39:59.765869] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cccdc0, cid 3, qid 0 00:14:34.342 [2024-07-26 
07:39:59.766172] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.342 [2024-07-26 07:39:59.766189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.342 [2024-07-26 07:39:59.766194] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.342 [2024-07-26 07:39:59.766199] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cccdc0) on tqpair=0x1c8b2c0 00:14:34.342 [2024-07-26 07:39:59.766208] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.342 [2024-07-26 07:39:59.766213] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.342 [2024-07-26 07:39:59.766218] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8b2c0) 00:14:34.342 [2024-07-26 07:39:59.766226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.342 [2024-07-26 07:39:59.766249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cccdc0, cid 3, qid 0 00:14:34.342 [2024-07-26 07:39:59.770509] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.342 [2024-07-26 07:39:59.770530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.342 [2024-07-26 07:39:59.770536] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.342 [2024-07-26 07:39:59.770557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cccdc0) on tqpair=0x1c8b2c0 00:14:34.342 [2024-07-26 07:39:59.770564] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:34.342 [2024-07-26 07:39:59.770570] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:34.342 [2024-07-26 07:39:59.770584] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.342 [2024-07-26 07:39:59.770590] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.342 [2024-07-26 07:39:59.770594] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8b2c0) 00:14:34.342 [2024-07-26 07:39:59.770603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.342 [2024-07-26 07:39:59.770630] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cccdc0, cid 3, qid 0 00:14:34.342 [2024-07-26 07:39:59.770685] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.342 [2024-07-26 07:39:59.770692] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.342 [2024-07-26 07:39:59.770696] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.342 [2024-07-26 07:39:59.770701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cccdc0) on tqpair=0x1c8b2c0 00:14:34.342 [2024-07-26 07:39:59.770710] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:14:34.342 0% 00:14:34.342 Data Units Read: 0 00:14:34.342 Data Units Written: 0 00:14:34.342 Host Read Commands: 0 00:14:34.342 Host Write Commands: 0 00:14:34.342 Controller Busy Time: 0 minutes 00:14:34.342 Power Cycles: 0 00:14:34.342 Power On Hours: 0 hours 00:14:34.342 Unsafe Shutdowns: 0 00:14:34.342 Unrecoverable Media Errors: 0 00:14:34.342 Lifetime Error Log Entries: 0 00:14:34.342 Warning Temperature 
Time: 0 minutes 00:14:34.342 Critical Temperature Time: 0 minutes 00:14:34.342 00:14:34.342 Number of Queues 00:14:34.342 ================ 00:14:34.342 Number of I/O Submission Queues: 127 00:14:34.342 Number of I/O Completion Queues: 127 00:14:34.342 00:14:34.342 Active Namespaces 00:14:34.342 ================= 00:14:34.342 Namespace ID:1 00:14:34.342 Error Recovery Timeout: Unlimited 00:14:34.342 Command Set Identifier: NVM (00h) 00:14:34.342 Deallocate: Supported 00:14:34.342 Deallocated/Unwritten Error: Not Supported 00:14:34.342 Deallocated Read Value: Unknown 00:14:34.342 Deallocate in Write Zeroes: Not Supported 00:14:34.342 Deallocated Guard Field: 0xFFFF 00:14:34.342 Flush: Supported 00:14:34.342 Reservation: Supported 00:14:34.342 Namespace Sharing Capabilities: Multiple Controllers 00:14:34.342 Size (in LBAs): 131072 (0GiB) 00:14:34.342 Capacity (in LBAs): 131072 (0GiB) 00:14:34.342 Utilization (in LBAs): 131072 (0GiB) 00:14:34.342 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:34.342 EUI64: ABCDEF0123456789 00:14:34.342 UUID: c598babf-ad66-471c-a452-fe581c61d3cf 00:14:34.342 Thin Provisioning: Not Supported 00:14:34.342 Per-NS Atomic Units: Yes 00:14:34.342 Atomic Boundary Size (Normal): 0 00:14:34.342 Atomic Boundary Size (PFail): 0 00:14:34.342 Atomic Boundary Offset: 0 00:14:34.342 Maximum Single Source Range Length: 65535 00:14:34.342 Maximum Copy Length: 65535 00:14:34.342 Maximum Source Range Count: 1 00:14:34.342 NGUID/EUI64 Never Reused: No 00:14:34.342 Namespace Write Protected: No 00:14:34.342 Number of LBA Formats: 1 00:14:34.342 Current LBA Format: LBA Format #00 00:14:34.342 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:34.342 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:34.342 rmmod nvme_tcp 00:14:34.342 rmmod nvme_fabrics 00:14:34.342 rmmod nvme_keyring 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74192 ']' 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@490 -- # killprocess 74192 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 74192 ']' 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 74192 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74192 00:14:34.342 killing process with pid 74192 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74192' 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 74192 00:14:34.342 07:39:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 74192 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:34.910 00:14:34.910 real 0m2.586s 00:14:34.910 user 0m7.080s 00:14:34.910 sys 0m0.682s 00:14:34.910 ************************************ 00:14:34.910 END TEST nvmf_identify 00:14:34.910 ************************************ 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:34.910 ************************************ 00:14:34.910 START TEST nvmf_perf 00:14:34.910 ************************************ 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:34.910 * Looking for test storage... 
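Before nvmf_perf begins probing for test storage, the identify test tears its environment down as logged above: the subsystem is deleted over RPC, the kernel nvme-tcp stack is unloaded, the nvmf_tgt process is killed and the initiator address is flushed. A condensed manual equivalent of that teardown (a sketch; the pid, namespace and interface names are simply the ones this harness used, and ip netns delete stands in for the harness's remove_spdk_ns helper):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp              # also unloads nvme_fabrics and nvme_keyring
  kill 74192                           # stop the nvmf_tgt reactor; the harness then waits for it
  ip netns delete nvmf_tgt_ns_spdk     # assumed equivalent of remove_spdk_ns
  ip -4 addr flush nvmf_init_if        # clear the initiator-side veth address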
00:14:34.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.910 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.911 07:40:00 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:34.911 Cannot find device "nvmf_tgt_br" 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # true 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:34.911 Cannot find device "nvmf_tgt_br2" 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # true 00:14:34.911 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:35.170 Cannot find device "nvmf_tgt_br" 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # 
ip link set nvmf_tgt_br2 down 00:14:35.170 Cannot find device "nvmf_tgt_br2" 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:35.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:35.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:35.170 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:35.171 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:35.171 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:35.171 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 
master nvmf_br 00:14:35.171 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:35.171 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:35.171 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:35.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:14:35.171 00:14:35.171 --- 10.0.0.2 ping statistics --- 00:14:35.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.171 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:35.171 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:35.171 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:35.171 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:14:35.171 00:14:35.171 --- 10.0.0.3 ping statistics --- 00:14:35.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.171 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:35.171 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:35.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:35.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:14:35.171 00:14:35.171 --- 10.0.0.1 ping statistics --- 00:14:35.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.171 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:35.171 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.171 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:14:35.171 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:35.171 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.171 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:35.171 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:35.171 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.171 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:35.171 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:35.430 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:35.430 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:35.430 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:35.430 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:35.430 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=74405 00:14:35.430 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:35.430 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 74405 00:14:35.430 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 74405 ']' 00:14:35.430 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.430 07:40:00 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:35.430 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.430 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:35.430 07:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:35.430 [2024-07-26 07:40:00.842677] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:14:35.430 [2024-07-26 07:40:00.842767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.430 [2024-07-26 07:40:00.982906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.688 [2024-07-26 07:40:01.094779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.688 [2024-07-26 07:40:01.094836] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.688 [2024-07-26 07:40:01.094847] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.688 [2024-07-26 07:40:01.094856] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.688 [2024-07-26 07:40:01.094862] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.688 [2024-07-26 07:40:01.095000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.688 [2024-07-26 07:40:01.095812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.688 [2024-07-26 07:40:01.095993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:35.688 [2024-07-26 07:40:01.096099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.688 [2024-07-26 07:40:01.169273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:36.258 07:40:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:36.258 07:40:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:14:36.258 07:40:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:36.258 07:40:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:36.258 07:40:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:36.515 07:40:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.515 07:40:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:36.515 07:40:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:36.773 07:40:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:36.774 07:40:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:37.032 07:40:02 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:37.032 07:40:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:37.290 07:40:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:37.290 07:40:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:14:37.290 07:40:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:37.290 07:40:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:37.290 07:40:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:37.548 [2024-07-26 07:40:03.049402] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.548 07:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:37.806 07:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:37.806 07:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:38.064 07:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:38.064 07:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:38.322 07:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.580 [2024-07-26 07:40:03.994990] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.580 07:40:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:38.839 07:40:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:38.839 07:40:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:38.839 07:40:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:38.839 07:40:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:39.773 Initializing NVMe Controllers 00:14:39.773 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:39.773 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:39.773 Initialization complete. Launching workers. 
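The RPC sequence above is the whole target-side configuration for this perf pass: the local NVMe controller is attached via gen_nvme.sh piped into load_subsystem_config, a 64 MiB Malloc bdev is created, and both bdevs are exported as namespaces of nqn.2016-06.io.spdk:cnode1 behind a TCP listener on 10.0.0.2:4420. A minimal stand-alone sketch of that sequence, reusing the values from this run (rpc.py talks to the default /var/tmp/spdk.sock), would be:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | $rpc load_subsystem_config   # attach the local PCIe controller as Nvme0
$rpc bdev_malloc_create 64 512                                                  # RAM-backed bdev, returns "Malloc0"
$rpc nvmf_create_transport -t tcp -o                                            # transport flags exactly as used in this run
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420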
00:14:39.773 ======================================================== 00:14:39.773 Latency(us) 00:14:39.773 Device Information : IOPS MiB/s Average min max 00:14:39.773 PCIE (0000:00:10.0) NSID 1 from core 0: 23689.66 92.54 1350.89 308.03 7009.91 00:14:39.773 ======================================================== 00:14:39.773 Total : 23689.66 92.54 1350.89 308.03 7009.91 00:14:39.773 00:14:39.773 07:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:41.145 Initializing NVMe Controllers 00:14:41.145 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:41.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:41.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:41.145 Initialization complete. Launching workers. 00:14:41.145 ======================================================== 00:14:41.145 Latency(us) 00:14:41.145 Device Information : IOPS MiB/s Average min max 00:14:41.145 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3795.98 14.83 262.01 100.52 5112.89 00:14:41.145 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8104.38 5035.15 12011.22 00:14:41.145 ======================================================== 00:14:41.145 Total : 3919.98 15.31 510.09 100.52 12011.22 00:14:41.145 00:14:41.145 07:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:42.517 Initializing NVMe Controllers 00:14:42.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:42.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:42.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:42.518 Initialization complete. Launching workers. 00:14:42.518 ======================================================== 00:14:42.518 Latency(us) 00:14:42.518 Device Information : IOPS MiB/s Average min max 00:14:42.518 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8859.35 34.61 3613.22 676.47 9995.85 00:14:42.518 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3996.35 15.61 8020.45 5011.77 15286.31 00:14:42.518 ======================================================== 00:14:42.518 Total : 12855.70 50.22 4983.26 676.47 15286.31 00:14:42.518 00:14:42.518 07:40:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:42.518 07:40:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:45.052 Initializing NVMe Controllers 00:14:45.052 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:45.052 Controller IO queue size 128, less than required. 00:14:45.052 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:45.052 Controller IO queue size 128, less than required. 
00:14:45.053 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:45.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:45.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:45.053 Initialization complete. Launching workers. 00:14:45.053 ======================================================== 00:14:45.053 Latency(us) 00:14:45.053 Device Information : IOPS MiB/s Average min max 00:14:45.053 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1583.67 395.92 81356.01 40917.67 134975.75 00:14:45.053 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 665.86 166.47 202121.61 73735.08 324208.94 00:14:45.053 ======================================================== 00:14:45.053 Total : 2249.53 562.38 117102.63 40917.67 324208.94 00:14:45.053 00:14:45.311 07:40:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:45.311 Initializing NVMe Controllers 00:14:45.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:45.311 Controller IO queue size 128, less than required. 00:14:45.311 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:45.311 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:45.311 Controller IO queue size 128, less than required. 00:14:45.311 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:45.311 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:45.311 WARNING: Some requested NVMe devices were skipped 00:14:45.311 No valid NVMe controllers or AIO or URING devices found 00:14:45.569 07:40:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:48.098 Initializing NVMe Controllers 00:14:48.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:48.098 Controller IO queue size 128, less than required. 00:14:48.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:48.098 Controller IO queue size 128, less than required. 00:14:48.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:48.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:48.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:48.098 Initialization complete. Launching workers. 
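All of the fabric-side measurements above reach the listener created earlier through the same transport ID string; only queue depth (-q), IO size (-o), pattern and runtime change between runs. The -o 36964 case is a deliberate negative test: 36964 is not a multiple of either namespace's sector size (512 or 4096), so both namespaces are skipped and perf reports that no valid controllers were found. A representative invocation, with the parameters of the --transport-stat run whose output follows, looks roughly like:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 128 -o 262144 -w randrw -M 50 -t 2 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    --transport-stat   # additionally dumps the per-poll-group TCP counters shown below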
00:14:48.098 00:14:48.098 ==================== 00:14:48.098 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:48.098 TCP transport: 00:14:48.098 polls: 11145 00:14:48.098 idle_polls: 8302 00:14:48.098 sock_completions: 2843 00:14:48.098 nvme_completions: 5369 00:14:48.098 submitted_requests: 8048 00:14:48.098 queued_requests: 1 00:14:48.098 00:14:48.098 ==================== 00:14:48.098 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:48.098 TCP transport: 00:14:48.098 polls: 10497 00:14:48.098 idle_polls: 6942 00:14:48.098 sock_completions: 3555 00:14:48.098 nvme_completions: 6147 00:14:48.098 submitted_requests: 9274 00:14:48.098 queued_requests: 1 00:14:48.098 ======================================================== 00:14:48.098 Latency(us) 00:14:48.098 Device Information : IOPS MiB/s Average min max 00:14:48.098 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1341.92 335.48 97996.86 43148.13 166044.05 00:14:48.098 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1536.41 384.10 82908.46 37008.68 155434.85 00:14:48.098 ======================================================== 00:14:48.098 Total : 2878.33 719.58 89942.90 37008.68 166044.05 00:14:48.098 00:14:48.098 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:48.098 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:48.425 rmmod nvme_tcp 00:14:48.425 rmmod nvme_fabrics 00:14:48.425 rmmod nvme_keyring 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 74405 ']' 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 74405 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 74405 ']' 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 74405 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74405 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:48.425 killing process with pid 74405 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74405' 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 74405 00:14:48.425 07:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 74405 00:14:48.992 07:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:48.992 07:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:48.992 07:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:48.992 07:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:48.992 07:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:48.992 07:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.992 07:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.992 07:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.992 07:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:48.992 00:14:48.992 real 0m14.153s 00:14:48.992 user 0m51.655s 00:14:48.992 sys 0m4.185s 00:14:48.992 07:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:48.992 07:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:48.992 ************************************ 00:14:48.992 END TEST nvmf_perf 00:14:48.992 ************************************ 00:14:48.992 07:40:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:48.992 07:40:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:48.992 07:40:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:48.992 07:40:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:48.992 ************************************ 00:14:48.992 START TEST nvmf_fio_host 00:14:48.992 ************************************ 00:14:48.992 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:49.252 * Looking for test storage... 
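Teardown mirrors setup: nvmftestfini unloads the kernel NVMe/TCP initiator modules, stops the target process, and removes the veth/namespace topology. Condensed, with the PID from this run, the path is roughly as follows (the body of _remove_spdk_ns is not shown in the log, so the netns deletion line is an assumption):

modprobe -v -r nvme-tcp            # also drops nvme_fabrics/nvme_keyring, as the rmmod lines above show
modprobe -v -r nvme-fabrics
kill 74405 && wait 74405           # killprocess: stop the nvmf_tgt reactor process
ip netns delete nvmf_tgt_ns_spdk   # assumed effect of _remove_spdk_ns
ip -4 addr flush nvmf_init_if      # clear the initiator-side veth address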
00:14:49.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:49.252 07:40:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 
-- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.252 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:49.253 Cannot find device "nvmf_tgt_br" 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:49.253 Cannot find device "nvmf_tgt_br2" 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:49.253 
Cannot find device "nvmf_tgt_br" 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:49.253 Cannot find device "nvmf_tgt_br2" 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:49.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:49.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:49.253 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:49.512 07:40:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:49.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:14:49.512 00:14:49.512 --- 10.0.0.2 ping statistics --- 00:14:49.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.512 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:49.512 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:49.512 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:14:49.512 00:14:49.512 --- 10.0.0.3 ping statistics --- 00:14:49.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.512 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:49.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:49.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:49.512 00:14:49.512 --- 10.0.0.1 ping statistics --- 00:14:49.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.512 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:49.512 07:40:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:49.512 07:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:49.512 07:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:49.512 07:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:49.513 07:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:49.513 07:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74810 00:14:49.513 07:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0xF 00:14:49.513 07:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:49.513 07:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74810 00:14:49.513 07:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 74810 ']' 00:14:49.513 07:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.513 07:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:49.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.513 07:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.513 07:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:49.513 07:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:49.513 [2024-07-26 07:40:15.067096] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:14:49.513 [2024-07-26 07:40:15.067180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.772 [2024-07-26 07:40:15.203685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:49.772 [2024-07-26 07:40:15.324036] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.772 [2024-07-26 07:40:15.324355] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.772 [2024-07-26 07:40:15.324612] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.772 [2024-07-26 07:40:15.324735] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.772 [2024-07-26 07:40:15.324831] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
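Because the target was started with -e 0xFFFF, every tracepoint group is enabled, which is what the spdk_trace notices above refer to. Following those hints against this instance (-i 0) would look roughly like the sketch below; the spdk_trace binary path depends on the build layout, so it is left unqualified here.

spdk_trace -s nvmf -i 0          # snapshot of events from the running target, per the notice above
cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the shared-memory trace file for offline analysis/debug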
00:14:49.772 [2024-07-26 07:40:15.325045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.772 [2024-07-26 07:40:15.325598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.772 [2024-07-26 07:40:15.325663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:49.772 [2024-07-26 07:40:15.325669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.031 [2024-07-26 07:40:15.400332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:50.598 07:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.598 07:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:14:50.598 07:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:50.857 [2024-07-26 07:40:16.338947] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.857 07:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:50.857 07:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:50.857 07:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:50.857 07:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:51.116 Malloc1 00:14:51.116 07:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:51.683 07:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.683 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.942 [2024-07-26 07:40:17.436857] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.942 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:52.200 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:52.201 07:40:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:52.460 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:52.460 fio-3.35 00:14:52.460 Starting 1 thread 00:14:54.993 00:14:54.993 test: (groupid=0, jobs=1): err= 0: pid=74893: Fri Jul 26 07:40:20 2024 00:14:54.993 read: IOPS=9151, BW=35.7MiB/s (37.5MB/s)(71.8MiB/2007msec) 00:14:54.993 slat (nsec): min=1952, max=670108, avg=2370.25, stdev=5870.49 00:14:54.993 clat (usec): min=2548, max=13678, avg=7281.44, stdev=488.61 00:14:54.993 lat (usec): min=2608, max=13680, avg=7283.81, stdev=488.36 00:14:54.993 clat percentiles (usec): 00:14:54.993 | 1.00th=[ 6259], 5.00th=[ 6587], 10.00th=[ 6718], 20.00th=[ 6915], 00:14:54.993 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7242], 60.00th=[ 7373], 00:14:54.993 | 70.00th=[ 7504], 80.00th=[ 7635], 90.00th=[ 7832], 95.00th=[ 7963], 00:14:54.993 | 99.00th=[ 8356], 99.50th=[ 8717], 99.90th=[11076], 99.95th=[12387], 00:14:54.993 | 99.99th=[13698] 00:14:54.993 bw ( KiB/s): min=35888, max=37072, per=99.93%, avg=36581.50, stdev=507.74, samples=4 00:14:54.993 iops : min= 8972, max= 9268, avg=9145.25, stdev=126.94, samples=4 00:14:54.993 write: IOPS=9161, BW=35.8MiB/s (37.5MB/s)(71.8MiB/2007msec); 0 zone resets 00:14:54.993 slat (usec): min=2, max=262, avg= 2.42, stdev= 2.16 00:14:54.993 clat (usec): min=2410, max=13050, avg=6641.65, stdev=435.98 00:14:54.993 lat (usec): min=2424, max=13052, avg=6644.08, stdev=435.86 00:14:54.993 clat 
percentiles (usec): 00:14:54.993 | 1.00th=[ 5735], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6325], 00:14:54.993 | 30.00th=[ 6456], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6718], 00:14:54.993 | 70.00th=[ 6849], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7242], 00:14:54.993 | 99.00th=[ 7570], 99.50th=[ 8029], 99.90th=[10159], 99.95th=[11076], 00:14:54.993 | 99.99th=[12387] 00:14:54.993 bw ( KiB/s): min=36472, max=36800, per=99.98%, avg=36639.50, stdev=134.86, samples=4 00:14:54.993 iops : min= 9118, max= 9200, avg=9159.75, stdev=33.69, samples=4 00:14:54.993 lat (msec) : 4=0.08%, 10=99.79%, 20=0.14% 00:14:54.993 cpu : usr=68.94%, sys=23.38%, ctx=9, majf=0, minf=7 00:14:54.993 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:54.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:54.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:54.993 issued rwts: total=18368,18388,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:54.993 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:54.993 00:14:54.993 Run status group 0 (all jobs): 00:14:54.993 READ: bw=35.7MiB/s (37.5MB/s), 35.7MiB/s-35.7MiB/s (37.5MB/s-37.5MB/s), io=71.8MiB (75.2MB), run=2007-2007msec 00:14:54.993 WRITE: bw=35.8MiB/s (37.5MB/s), 35.8MiB/s-35.8MiB/s (37.5MB/s-37.5MB/s), io=71.8MiB (75.3MB), run=2007-2007msec 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 
-- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:54.993 07:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:54.993 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:54.993 fio-3.35 00:14:54.993 Starting 1 thread 00:14:57.528 00:14:57.528 test: (groupid=0, jobs=1): err= 0: pid=74936: Fri Jul 26 07:40:22 2024 00:14:57.528 read: IOPS=8404, BW=131MiB/s (138MB/s)(264MiB/2007msec) 00:14:57.528 slat (usec): min=3, max=116, avg= 3.76, stdev= 1.88 00:14:57.528 clat (usec): min=2171, max=16785, avg=8585.59, stdev=2546.31 00:14:57.528 lat (usec): min=2174, max=16788, avg=8589.35, stdev=2546.35 00:14:57.528 clat percentiles (usec): 00:14:57.528 | 1.00th=[ 4015], 5.00th=[ 4883], 10.00th=[ 5407], 20.00th=[ 6128], 00:14:57.528 | 30.00th=[ 6980], 40.00th=[ 7701], 50.00th=[ 8455], 60.00th=[ 9110], 00:14:57.528 | 70.00th=[10028], 80.00th=[10683], 90.00th=[11994], 95.00th=[13173], 00:14:57.528 | 99.00th=[15270], 99.50th=[15664], 99.90th=[16581], 99.95th=[16712], 00:14:57.528 | 99.99th=[16909] 00:14:57.528 bw ( KiB/s): min=60448, max=76896, per=50.60%, avg=68040.00, stdev=8612.28, samples=4 00:14:57.528 iops : min= 3778, max= 4806, avg=4252.50, stdev=538.27, samples=4 00:14:57.528 write: IOPS=4900, BW=76.6MiB/s (80.3MB/s)(139MiB/1819msec); 0 zone resets 00:14:57.528 slat (usec): min=35, max=325, avg=38.02, stdev= 7.03 00:14:57.528 clat (usec): min=5906, max=19839, avg=11834.31, stdev=2206.82 00:14:57.528 lat (usec): min=5943, max=19886, avg=11872.33, stdev=2206.65 00:14:57.528 clat percentiles (usec): 00:14:57.528 | 1.00th=[ 7635], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10028], 00:14:57.528 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11469], 60.00th=[12125], 00:14:57.528 | 70.00th=[12780], 80.00th=[13566], 90.00th=[14746], 95.00th=[15926], 00:14:57.528 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19530], 99.95th=[19530], 00:14:57.528 | 99.99th=[19792] 00:14:57.528 bw ( KiB/s): min=62368, max=79872, per=90.24%, avg=70752.00, stdev=8659.69, samples=4 00:14:57.528 iops : min= 3898, max= 4992, avg=4422.00, stdev=541.23, samples=4 00:14:57.528 lat (msec) : 4=0.59%, 10=52.37%, 20=47.03% 00:14:57.528 cpu : usr=82.70%, sys=13.46%, ctx=4, majf=0, minf=16 00:14:57.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:57.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:57.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:57.528 issued rwts: total=16867,8914,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:57.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:57.528 00:14:57.528 Run status group 0 (all jobs): 00:14:57.528 READ: bw=131MiB/s (138MB/s), 
131MiB/s-131MiB/s (138MB/s-138MB/s), io=264MiB (276MB), run=2007-2007msec 00:14:57.528 WRITE: bw=76.6MiB/s (80.3MB/s), 76.6MiB/s-76.6MiB/s (80.3MB/s-80.3MB/s), io=139MiB (146MB), run=1819-1819msec 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:57.528 rmmod nvme_tcp 00:14:57.528 rmmod nvme_fabrics 00:14:57.528 rmmod nvme_keyring 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 74810 ']' 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 74810 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 74810 ']' 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 74810 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:57.528 07:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74810 00:14:57.528 killing process with pid 74810 00:14:57.528 07:40:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:57.528 07:40:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:57.528 07:40:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74810' 00:14:57.528 07:40:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 74810 00:14:57.528 07:40:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 74810 00:14:57.787 07:40:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:57.787 07:40:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:57.787 07:40:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:57.787 07:40:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.787 07:40:23 
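The fio_nvme run traced above reduces to the invocation pattern below. This is a minimal sketch, assuming fio is installed under /usr/src/fio and the SPDK tree was built with its fio plugin; the paths and the filename string are the ones printed in the trace, the variable names are only illustrative.

# Minimal sketch of the fio/SPDK-plugin invocation exercised above.
PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
JOBFILE=/home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio

# The job file selects ioengine=spdk; the filename string encodes the
# NVMe-oF/TCP transport, address, service id and namespace to attach to.
# (The script also prepends an ASAN runtime to LD_PRELOAD when the plugin
# links against one; in this run the ldd/grep probe came back empty.)
LD_PRELOAD=$PLUGIN /usr/src/fio/fio "$JOBFILE" \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'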
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:57.787 07:40:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.787 07:40:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:57.787 07:40:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.787 07:40:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:58.045 ************************************ 00:14:58.045 END TEST nvmf_fio_host 00:14:58.045 ************************************ 00:14:58.045 00:14:58.045 real 0m8.822s 00:14:58.045 user 0m36.058s 00:14:58.045 sys 0m2.430s 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:58.045 ************************************ 00:14:58.045 START TEST nvmf_failover 00:14:58.045 ************************************ 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:58.045 * Looking for test storage... 00:14:58.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 
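Among the variables sourced above, nvmf/common.sh generates a per-run host identity (NVME_HOSTNQN / NVME_HOSTID) and defines NVME_CONNECT='nvme connect'. A sketch of how tests that use the kernel initiator combine them, assuming nvme-cli is installed; the hostid derivation shown is illustrative and not necessarily the exact expression common.sh uses.

# Illustrative only: per-run host identity as set up by nvmf/common.sh above.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed derivation: the uuid portion of the NQN
# Kernel-initiator connect using that identity (wrapped by tests as
# $NVME_CONNECT "${NVME_HOST[@]}" ...):
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"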
00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:58.045 Cannot find device "nvmf_tgt_br" 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # true 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:58.045 Cannot find device "nvmf_tgt_br2" 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # true 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:58.045 Cannot find device "nvmf_tgt_br" 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # true 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:58.045 Cannot find device "nvmf_tgt_br2" 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # true 00:14:58.045 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:58.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:58.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- 
# ip netns add nvmf_tgt_ns_spdk 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:58.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:14:58.304 00:14:58.304 --- 10.0.0.2 ping statistics --- 00:14:58.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.304 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:58.304 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:58.304 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:14:58.304 00:14:58.304 --- 10.0.0.3 ping statistics --- 00:14:58.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.304 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:58.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:58.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:14:58.304 00:14:58.304 --- 10.0.0.1 ping statistics --- 00:14:58.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.304 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:58.304 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:58.564 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:58.564 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:58.564 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:58.564 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:58.564 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75149 00:14:58.564 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:58.564 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75149 00:14:58.564 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75149 ']' 00:14:58.564 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.564 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:58.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.564 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.564 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:58.564 07:40:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:58.564 [2024-07-26 07:40:23.978427] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
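The nvmf_veth_init sequence traced above builds a bridged veth topology between the host (initiator side, 10.0.0.1) and a network namespace that will run the target (10.0.0.2). Condensed into a standalone sketch, with the commands copied from the trace and run as root; the second target interface (nvmf_tgt_if2, 10.0.0.3) is created the same way and omitted here for brevity.

# Veth/namespace topology used by the test above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the two veth peers so host and namespace can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # the target address should answer through the bridge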
00:14:58.564 [2024-07-26 07:40:23.978758] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.564 [2024-07-26 07:40:24.129068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:58.825 [2024-07-26 07:40:24.253929] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.825 [2024-07-26 07:40:24.254004] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.825 [2024-07-26 07:40:24.254019] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.825 [2024-07-26 07:40:24.254031] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.825 [2024-07-26 07:40:24.254041] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:58.825 [2024-07-26 07:40:24.254222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.825 [2024-07-26 07:40:24.254352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.825 [2024-07-26 07:40:24.254359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.825 [2024-07-26 07:40:24.333265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:59.391 07:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:59.391 07:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:14:59.391 07:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:59.391 07:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:59.391 07:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:59.649 07:40:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.649 07:40:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:59.649 [2024-07-26 07:40:25.223645] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.908 07:40:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:59.908 Malloc0 00:15:00.166 07:40:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:00.166 07:40:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:00.424 07:40:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.681 [2024-07-26 07:40:26.198451] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.681 07:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
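The target application itself is launched inside that namespace, as shown by nvmfappstart above; below is a rough equivalent of the launch plus the waitforlisten poll, assuming the default RPC socket /var/tmp/spdk.sock (the polling loop is a stand-in, not the exact helper from autotest_common.sh).

# Start nvmf_tgt in the target namespace and wait for its RPC socket.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
NVMFPID=$!

# Rough stand-in for waitforlisten: poll until the RPC server answers.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done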
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:00.939 [2024-07-26 07:40:26.422631] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:00.939 07:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:01.200 [2024-07-26 07:40:26.650863] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:01.200 07:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75207 00:15:01.200 07:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:01.200 07:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:01.200 07:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75207 /var/tmp/bdevperf.sock 00:15:01.200 07:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75207 ']' 00:15:01.200 07:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:01.200 07:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:01.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:01.200 07:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
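Collected in one place, the target-side RPC sequence issued above; rpc.py talks to the default /var/tmp/spdk.sock, which is reachable from outside the namespace because it is a Unix socket. The loop over ports is a compact sketch of the three separate add_listener calls in the trace.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Three listeners on the same address, so the host has alternate ports to
# fail over to when one is removed later in the test.
for port in 4420 4421 4422; do
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s "$port"
done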
00:15:01.200 07:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:01.200 07:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:02.134 07:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:02.134 07:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:15:02.134 07:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:02.392 NVMe0n1 00:15:02.392 07:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:02.650 00:15:02.907 07:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:02.907 07:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75232 00:15:02.907 07:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:03.840 07:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.099 [2024-07-26 07:40:29.506611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.506668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.506681] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.506691] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.506701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.506710] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.506720] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.506730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.506739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.506748] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.506757] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.506766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.506776] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.506785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.506794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.506803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.506813] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.506821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507127] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507136] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507173] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507430] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 
00:15:04.099 [2024-07-26 07:40:29.507707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507717] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507726] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507736] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507765] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507776] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.507927] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.508056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.508074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.508204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.508215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.508499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.508512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.508523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.508532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.508542] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.508551] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.508808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.508821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.508831] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is 
same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.508841] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.508850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509116] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509157] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509421] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509430] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509717] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509746] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509756] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.509895] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510152] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510358] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510656] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510911] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510948] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510957] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510966] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.510975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.511220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.511241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.511258] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.511268] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.511278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.511288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.511532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 [2024-07-26 07:40:29.511544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10a70 is same with the state(5) to be set 00:15:04.099 07:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:07.380 07:40:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:07.380 00:15:07.380 07:40:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:07.637 07:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:10.925 07:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.925 [2024-07-26 07:40:36.360689] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.925 07:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:11.860 07:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:12.119 07:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75232 00:15:18.765 0 00:15:18.765 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75207 00:15:18.765 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75207 ']' 00:15:18.765 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75207 00:15:18.765 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:15:18.765 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:18.765 
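The host side of the failover test, reconstructed from the trace above as a sketch: bdevperf is started with its own RPC socket, the same subsystem is attached through two portals under one controller name, I/O is kicked off, and listeners are then removed and re-added on the target so the initiator must switch paths. The ABORTED - SQ DELETION completions and the tqpair state changes above are the visible effect of tearing down the active listener. In the real script, waitforlisten polls the bdevperf socket before any RPC is issued; that step is reduced to a comment here.

SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bdevperf.sock

# Start bdevperf in wait-for-RPC mode (poll $BPERF_SOCK before continuing).
$SPDK/build/examples/bdevperf -z -r $BPERF_SOCK -q 128 -o 4096 -w verify -t 15 -f &

# Two paths to the same subsystem under one controller name (NVMe0).
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Run I/O in the background while the target-side paths are shuffled.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests &

# As in the trace: drop 4420, attach 4422, drop 4421, re-add 4420, drop 4422;
# I/O keeps flowing as long as at least one path stays up.
$SPDK/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420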
07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75207 00:15:18.765 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:18.765 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:18.765 killing process with pid 75207 00:15:18.765 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75207' 00:15:18.765 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75207 00:15:18.765 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75207 00:15:18.765 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:18.765 [2024-07-26 07:40:26.728054] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:15:18.765 [2024-07-26 07:40:26.728172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75207 ] 00:15:18.765 [2024-07-26 07:40:26.869607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.765 [2024-07-26 07:40:26.997948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.765 [2024-07-26 07:40:27.071632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:18.765 Running I/O for 15 seconds... 00:15:18.765 [2024-07-26 07:40:29.511607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.765 [2024-07-26 07:40:29.511656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.765 [2024-07-26 07:40:29.511686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.765 [2024-07-26 07:40:29.511702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.765 [2024-07-26 07:40:29.511718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.765 [2024-07-26 07:40:29.511733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.765 [2024-07-26 07:40:29.511748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.765 [2024-07-26 07:40:29.511762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.765 [2024-07-26 07:40:29.511778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.765 [2024-07-26 07:40:29.511792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.765 [2024-07-26 07:40:29.511808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:102 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.765 [2024-07-26 07:40:29.511821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.765 [2024-07-26 07:40:29.511837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.511851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.511867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.511880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.511896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.511910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.511925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.511939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.511955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65944 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:18.766 [2024-07-26 07:40:29.512488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512787] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.766 [2024-07-26 07:40:29.512966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.766 [2024-07-26 07:40:29.512981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.512995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.513979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.513999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.514015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.514029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.514044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.514058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 
[2024-07-26 07:40:29.514073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.514087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.767 [2024-07-26 07:40:29.514103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.767 [2024-07-26 07:40:29.514117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:66576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:117 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.514981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.514996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66672 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.515010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.515025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.515039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.515054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.515067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.515083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.515096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.515111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.768 [2024-07-26 07:40:29.515125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.515140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.768 [2024-07-26 07:40:29.515173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.515189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.768 [2024-07-26 07:40:29.515204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.768 [2024-07-26 07:40:29.515219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.768 [2024-07-26 07:40:29.515233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.769 [2024-07-26 07:40:29.515262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.769 [2024-07-26 07:40:29.515291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:18.769 [2024-07-26 07:40:29.515320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.769 [2024-07-26 07:40:29.515348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.769 [2024-07-26 07:40:29.515378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.769 [2024-07-26 07:40:29.515406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.769 [2024-07-26 07:40:29.515436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.769 [2024-07-26 07:40:29.515475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.769 [2024-07-26 07:40:29.515508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.769 [2024-07-26 07:40:29.515538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.769 [2024-07-26 07:40:29.515575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.769 [2024-07-26 07:40:29.515604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.769 [2024-07-26 07:40:29.515634] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2539830 is same with the state(5) to be set 00:15:18.769 [2024-07-26 07:40:29.515676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:18.769 [2024-07-26 07:40:29.515688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:18.769 [2024-07-26 07:40:29.515699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66720 len:8 PRP1 0x0 PRP2 0x0 00:15:18.769 [2024-07-26 07:40:29.515713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515783] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2539830 was disconnected and freed. reset controller. 00:15:18.769 [2024-07-26 07:40:29.515801] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:18.769 [2024-07-26 07:40:29.515859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.769 [2024-07-26 07:40:29.515880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.769 [2024-07-26 07:40:29.515908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.769 [2024-07-26 07:40:29.515936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.769 [2024-07-26 07:40:29.515963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:29.515977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:18.769 [2024-07-26 07:40:29.519862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:18.769 [2024-07-26 07:40:29.519903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca570 (9): Bad file descriptor 00:15:18.769 [2024-07-26 07:40:29.552862] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
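The dump above (from try.txt) shows bdevperf reads on qpair 0x2539830 being completed as ABORTED - SQ DELETION while the TCP qpair is torn down, after which bdev_nvme starts a failover from 10.0.0.2:4420 to 10.0.0.2:4421 and the controller reset succeeds. A minimal sketch of how a two-listener TCP failover of this kind can be set up with SPDK's rpc.py follows; it is not the exact command sequence of host/failover.sh, and the bdev/controller names, malloc size, and RPC socket path are assumptions:

  # Target side: one malloc namespace exported over two TCP listeners so the
  # host has an alternate path (assumed names/sizes, not the test's values).
  rpc.py nvmf_create_transport -t tcp
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # Host side: with bdevperf running as an RPC server (assumed socket path),
  # attach the same controller name through both trids; the second call
  # registers 10.0.0.2:4421 as the alternate path to fail over to.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Dropping the active listener while I/O is in flight tears down its queue
  # pairs: queued commands complete as ABORTED - SQ DELETION and bdev_nvme
  # resets the controller against the remaining 4421 path, as logged above.
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420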
00:15:18.769 [2024-07-26 07:40:33.106292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.769 [2024-07-26 07:40:33.106369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:33.106449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.769 [2024-07-26 07:40:33.106467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:33.106497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.769 [2024-07-26 07:40:33.106516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:33.106532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.769 [2024-07-26 07:40:33.106546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:33.106561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.769 [2024-07-26 07:40:33.106575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:33.106590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.769 [2024-07-26 07:40:33.106604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:33.106619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.769 [2024-07-26 07:40:33.106632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:33.106647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.769 [2024-07-26 07:40:33.106661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:33.106676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.769 [2024-07-26 07:40:33.106690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:33.106705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.769 [2024-07-26 07:40:33.106718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:33.106733] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.769 [2024-07-26 07:40:33.106747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:33.106761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.769 [2024-07-26 07:40:33.106774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:33.106790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.769 [2024-07-26 07:40:33.106803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.769 [2024-07-26 07:40:33.106818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.770 [2024-07-26 07:40:33.106842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.106859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.770 [2024-07-26 07:40:33.106873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.106887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.770 [2024-07-26 07:40:33.106901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.106916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.770 [2024-07-26 07:40:33.106929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.106947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.770 [2024-07-26 07:40:33.106961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.106976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.770 [2024-07-26 07:40:33.106990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.770 [2024-07-26 07:40:33.107018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107033] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.770 [2024-07-26 07:40:33.107046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.770 [2024-07-26 07:40:33.107075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.770 [2024-07-26 07:40:33.107103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.770 [2024-07-26 07:40:33.107131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.770 [2024-07-26 07:40:33.107159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.770 [2024-07-26 07:40:33.107187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.770 [2024-07-26 07:40:33.107222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.770 [2024-07-26 07:40:33.107252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.770 [2024-07-26 07:40:33.107280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.770 [2024-07-26 07:40:33.107309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81104 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.770 [2024-07-26 07:40:33.107337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.770 [2024-07-26 07:40:33.107365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.770 [2024-07-26 07:40:33.107393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.770 [2024-07-26 07:40:33.107423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.770 [2024-07-26 07:40:33.107451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.770 [2024-07-26 07:40:33.107493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.770 [2024-07-26 07:40:33.107522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.770 [2024-07-26 07:40:33.107550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.770 [2024-07-26 07:40:33.107578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.770 [2024-07-26 07:40:33.107615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.770 [2024-07-26 07:40:33.107630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:15:18.770 [2024-07-26 07:40:33.107643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 00:15:18.771-773, 2024-07-26 07:40:33.107658 through 07:40:33.110324: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs for aborted I/O on sqid:1 (READ lba:81192-81616, WRITE lba:81760-82008, len:8 each), every completion reported as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:15:18.773 [2024-07-26 07:40:33.110346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2528710 is same with the state(5) to be set
00:15:18.773 [2024-07-26 07:40:33.110366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:15:18.773 [2024-07-26 07:40:33.110382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:15:18.773 [2024-07-26 07:40:33.110394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81624 len:8 PRP1 0x0 PRP2 0x0
00:15:18.773 [2024-07-26 07:40:33.110408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:18.773 [2024-07-26 07:40:33.110491] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2528710 was disconnected and freed. reset controller.
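The run above ends with nvme_tcp tearing down tqpair 0x2528710, aborting its queued I/O, manually completing the last outstanding READ, and handing the qpair back to bdev_nvme for a controller reset. As a rough illustration only (the script, its function name, and the log path are assumptions, not part of the SPDK tree or this job), the ABORTED - SQ DELETION completions in a captured console log could be tallied per queue like this:

import re
from collections import Counter

# Matches completion notices of the form seen above, e.g.
#   nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...
ABORT_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION \(00/08\) qid:(\d+)"
)

def count_sq_deletion_aborts(log_path):
    """Count ABORTED - SQ DELETION completions per qid in a console log."""
    counts = Counter()
    with open(log_path, errors="replace") as log:
        for line in log:
            counts.update(int(qid) for qid in ABORT_RE.findall(line))
    return counts

if __name__ == "__main__":
    # Hypothetical capture of this job's console output.
    print(count_sq_deletion_aborts("nvmf-tcp-uring-vg-autotest.log"))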
00:15:18.773 [2024-07-26 07:40:33.110512] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:15:18.773 [2024-07-26 07:40:33.110570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:15:18.773 [2024-07-26 07:40:33.110591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:18.773 [2024-07-26 07:40:33.110606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:15:18.773 [2024-07-26 07:40:33.110619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:18.773 [2024-07-26 07:40:33.110633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:15:18.773 [2024-07-26 07:40:33.110647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:18.773 [2024-07-26 07:40:33.110661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:15:18.773 [2024-07-26 07:40:33.110674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:18.773 [2024-07-26 07:40:33.110688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:15:18.773 [2024-07-26 07:40:33.110723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca570 (9): Bad file descriptor
00:15:18.773 [2024-07-26 07:40:33.114568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:18.773 [2024-07-26 07:40:33.156308] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
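The entries above make up one complete recovery cycle: bdev_nvme starts a failover from 10.0.0.2:4421 to 10.0.0.2:4422, the admin queue's outstanding ASYNC EVENT REQUESTs are aborted, the old connection is marked failed and disconnected, and the controller reset completes successfully. A small sketch in the same spirit as the previous one (function name and usage are assumptions) that pulls the failover endpoints and reset results out of such a log:

import re

# e.g. bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
FAILOVER_RE = re.compile(
    r"bdev_nvme_failover_trid: \*NOTICE\*: Start failover from (\S+) to (\S+)"
)
# e.g. bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
RESET_OK_RE = re.compile(
    r"_bdev_nvme_reset_ctrlr_complete: \*NOTICE\*: Resetting controller successful"
)

def summarize_recovery(log_path):
    """Return the (source, target) failover pairs and the number of successful resets."""
    failovers = []
    resets_ok = 0
    with open(log_path, errors="replace") as log:
        for line in log:
            failovers.extend(FAILOVER_RE.findall(line))
            resets_ok += len(RESET_OK_RE.findall(line))
    return failovers, resets_ok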
00:15:18.773 [2024-07-26 07:40:37.634726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:18.773 [2024-07-26 07:40:37.634803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 00:15:18.773-776, 2024-07-26 07:40:37.634835 through 07:40:37.637956: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs for aborted I/O on sqid:1 (WRITE lba:39128-39600, READ lba:38736-39048, len:8 each), every completion reported as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:15:18.776 [2024-07-26 07:40:37.637971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:126 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.776 [2024-07-26 07:40:37.637984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.776 [2024-07-26 07:40:37.638000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.776 [2024-07-26 07:40:37.638014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.776 [2024-07-26 07:40:37.638029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:18.776 [2024-07-26 07:40:37.638043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.776 [2024-07-26 07:40:37.638058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.776 [2024-07-26 07:40:37.638072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.776 [2024-07-26 07:40:37.638094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.776 [2024-07-26 07:40:37.638109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.776 [2024-07-26 07:40:37.638124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.776 [2024-07-26 07:40:37.638138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.776 [2024-07-26 07:40:37.638153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.776 [2024-07-26 07:40:37.638167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.776 [2024-07-26 07:40:37.638182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.776 [2024-07-26 07:40:37.638196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.776 [2024-07-26 07:40:37.638211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.776 [2024-07-26 07:40:37.638225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.776 [2024-07-26 07:40:37.638241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.776 [2024-07-26 07:40:37.638255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.776 [2024-07-26 07:40:37.638270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25288f0 is same with the 
state(5) to be set 00:15:18.776 [2024-07-26 07:40:37.638287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:18.776 [2024-07-26 07:40:37.638310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:18.776 [2024-07-26 07:40:37.638321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39112 len:8 PRP1 0x0 PRP2 0x0 00:15:18.776 [2024-07-26 07:40:37.638334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.776 [2024-07-26 07:40:37.638349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:18.776 [2024-07-26 07:40:37.638359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:18.776 [2024-07-26 07:40:37.638369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39632 len:8 PRP1 0x0 PRP2 0x0 00:15:18.776 [2024-07-26 07:40:37.638382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.776 [2024-07-26 07:40:37.638396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:18.777 [2024-07-26 07:40:37.638406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:18.777 [2024-07-26 07:40:37.638416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39640 len:8 PRP1 0x0 PRP2 0x0 00:15:18.777 [2024-07-26 07:40:37.638429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.777 [2024-07-26 07:40:37.638442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:18.777 [2024-07-26 07:40:37.638452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:18.777 [2024-07-26 07:40:37.638463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39648 len:8 PRP1 0x0 PRP2 0x0 00:15:18.777 [2024-07-26 07:40:37.638488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.777 [2024-07-26 07:40:37.638510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:18.777 [2024-07-26 07:40:37.638522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:18.777 [2024-07-26 07:40:37.638532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39656 len:8 PRP1 0x0 PRP2 0x0 00:15:18.777 [2024-07-26 07:40:37.638545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.777 [2024-07-26 07:40:37.638558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:18.777 [2024-07-26 07:40:37.638568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:18.777 [2024-07-26 07:40:37.638578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39664 len:8 PRP1 0x0 PRP2 0x0 00:15:18.777 [2024-07-26 07:40:37.638591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.777 [2024-07-26 
07:40:37.638604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:18.777 [2024-07-26 07:40:37.638614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:18.777 [2024-07-26 07:40:37.638623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39672 len:8 PRP1 0x0 PRP2 0x0 00:15:18.777 [2024-07-26 07:40:37.638636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.777 [2024-07-26 07:40:37.638649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:18.777 [2024-07-26 07:40:37.638659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:18.777 [2024-07-26 07:40:37.638669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39680 len:8 PRP1 0x0 PRP2 0x0 00:15:18.777 [2024-07-26 07:40:37.638681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.777 [2024-07-26 07:40:37.638695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:18.777 [2024-07-26 07:40:37.638706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:18.777 [2024-07-26 07:40:37.638717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39688 len:8 PRP1 0x0 PRP2 0x0 00:15:18.777 [2024-07-26 07:40:37.638730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.777 [2024-07-26 07:40:37.638743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:18.777 [2024-07-26 07:40:37.638753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:18.777 [2024-07-26 07:40:37.638763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39696 len:8 PRP1 0x0 PRP2 0x0 00:15:18.777 [2024-07-26 07:40:37.638776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.777 [2024-07-26 07:40:37.638789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:18.777 [2024-07-26 07:40:37.638799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:18.777 [2024-07-26 07:40:37.638809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39704 len:8 PRP1 0x0 PRP2 0x0 00:15:18.777 [2024-07-26 07:40:37.638822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.777 [2024-07-26 07:40:37.638836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:18.777 [2024-07-26 07:40:37.638846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:18.777 [2024-07-26 07:40:37.638856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39712 len:8 PRP1 0x0 PRP2 0x0 00:15:18.777 [2024-07-26 07:40:37.638875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.777 [2024-07-26 07:40:37.638889] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:18.777 [2024-07-26 07:40:37.638899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:18.777 [2024-07-26 07:40:37.638910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39720 len:8 PRP1 0x0 PRP2 0x0 00:15:18.777 [2024-07-26 07:40:37.638923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.777 [2024-07-26 07:40:37.638936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:18.777 [2024-07-26 07:40:37.638946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:18.777 [2024-07-26 07:40:37.638956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39728 len:8 PRP1 0x0 PRP2 0x0 00:15:18.777 [2024-07-26 07:40:37.638969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.777 [2024-07-26 07:40:37.638983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:18.777 [2024-07-26 07:40:37.638993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:18.777 [2024-07-26 07:40:37.639003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39736 len:8 PRP1 0x0 PRP2 0x0 00:15:18.777 [2024-07-26 07:40:37.639016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.777 [2024-07-26 07:40:37.639029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:18.777 [2024-07-26 07:40:37.639039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:18.777 [2024-07-26 07:40:37.639049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39744 len:8 PRP1 0x0 PRP2 0x0 00:15:18.777 [2024-07-26 07:40:37.639062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.777 [2024-07-26 07:40:37.639075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:18.777 [2024-07-26 07:40:37.639087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:18.777 [2024-07-26 07:40:37.639098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39752 len:8 PRP1 0x0 PRP2 0x0 00:15:18.777 [2024-07-26 07:40:37.639111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.777 [2024-07-26 07:40:37.639180] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25288f0 was disconnected and freed. reset controller. 
00:15:18.777 [2024-07-26 07:40:37.639199] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:15:18.777 [2024-07-26 07:40:37.639259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:15:18.777 [2024-07-26 07:40:37.639280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:18.777 [2024-07-26 07:40:37.639296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:15:18.777 [2024-07-26 07:40:37.639309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:18.777 [2024-07-26 07:40:37.639323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:15:18.777 [2024-07-26 07:40:37.639336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:18.777 [2024-07-26 07:40:37.639361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:15:18.777 [2024-07-26 07:40:37.639375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:18.777 [2024-07-26 07:40:37.639388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:15:18.777 [2024-07-26 07:40:37.639449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca570 (9): Bad file descriptor
00:15:18.777 [2024-07-26 07:40:37.643264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:18.777 [2024-07-26 07:40:37.682382] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
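The reset sequence above can also be confirmed from outside the test. The following is a minimal sketch, not part of failover.sh, that polls the bdevperf RPC socket and controller name appearing in this log (/var/tmp/bdevperf.sock, NVMe0) until the controller is reported as attached again; the retry loop and the 30-second limit are illustrative assumptions.

  # Sketch: wait for controller NVMe0 to reappear after a failover/reset.
  # Socket path, rpc.py path and controller name come from the log above;
  # the polling loop and timeout are assumptions for illustration.
  sock=/var/tmp/bdevperf.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 30); do
      if "$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0; then
          echo "NVMe0 is attached again"
          exit 0
      fi
      sleep 1
  done
  echo "NVMe0 did not reappear within 30 seconds" >&2
  exit 1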
00:15:18.777
00:15:18.777 Latency(us)
00:15:18.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:18.777 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:18.777 Verification LBA range: start 0x0 length 0x4000
00:15:18.777 NVMe0n1 : 15.01 9255.71 36.16 233.79 0.00 13456.76 629.29 21924.77
00:15:18.777 ===================================================================================================================
00:15:18.777 Total : 9255.71 36.16 233.79 0.00 13456.76 629.29 21924.77
00:15:18.777 Received shutdown signal, test time was about 15.000000 seconds
00:15:18.777
00:15:18.777 Latency(us)
00:15:18.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:18.777 ===================================================================================================================
00:15:18.778 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:18.778 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:15:18.778 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:15:18.778 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:15:18.778 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75409
00:15:18.778 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:15:18.778 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75409 /var/tmp/bdevperf.sock
00:15:18.778 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75409 ']'
00:15:18.778 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:15:18.778 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:15:18.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
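The grep/count trace above is the core failover check: the first bdevperf run is expected to have logged exactly three successful controller resets, one per failover step. A minimal sketch of that check is shown here; the $log variable is a stand-in, since the name of the file the first run's output was captured to is not visible in this excerpt.

  # Sketch of the reset-count check traced above (failover.sh lines 65-67).
  log=/path/to/first-bdevperf-run.log   # placeholder; the real capture file is set inside failover.sh
  count=$(grep -c 'Resetting controller successful' "$log")
  if (( count != 3 )); then
      echo "expected 3 successful resets, got $count" >&2
      exit 1
  fi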
00:15:18.778 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:18.778 07:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:19.345 07:40:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:19.345 07:40:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:15:19.345 07:40:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:19.345 [2024-07-26 07:40:44.938928] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:19.603 07:40:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:19.603 [2024-07-26 07:40:45.151036] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:19.603 07:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:20.170 NVMe0n1 00:15:20.170 07:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:20.170 00:15:20.428 07:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:20.686 00:15:20.686 07:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:20.686 07:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:20.945 07:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:20.945 07:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:24.229 07:40:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:24.229 07:40:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:24.229 07:40:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75486 00:15:24.229 07:40:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:24.229 07:40:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75486 00:15:25.605 0 00:15:25.605 07:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:25.605 [2024-07-26 07:40:43.771664] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:15:25.605 [2024-07-26 07:40:43.771803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75409 ] 00:15:25.605 [2024-07-26 07:40:43.912123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.605 [2024-07-26 07:40:44.034822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.605 [2024-07-26 07:40:44.108208] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:25.605 [2024-07-26 07:40:46.494104] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:25.605 [2024-07-26 07:40:46.494232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.605 [2024-07-26 07:40:46.494258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.605 [2024-07-26 07:40:46.494278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.605 [2024-07-26 07:40:46.494293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.605 [2024-07-26 07:40:46.494308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.605 [2024-07-26 07:40:46.494321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.605 [2024-07-26 07:40:46.494335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.605 [2024-07-26 07:40:46.494348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.605 [2024-07-26 07:40:46.494363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:25.605 [2024-07-26 07:40:46.494415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:25.605 [2024-07-26 07:40:46.494447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200c570 (9): Bad file descriptor 00:15:25.605 [2024-07-26 07:40:46.506361] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:25.605 Running I/O for 1 seconds... 
00:15:25.605
00:15:25.605 Latency(us)
00:15:25.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:25.605 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:25.605 Verification LBA range: start 0x0 length 0x4000
00:15:25.605 NVMe0n1 : 1.02 7177.25 28.04 0.00 0.00 17759.91 2308.65 15728.64
00:15:25.605 ===================================================================================================================
00:15:25.605 Total : 7177.25 28.04 0.00 0.00 17759.91 2308.65 15728.64
00:15:25.605 07:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:25.605 07:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:15:26.174 07:40:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:26.174 07:40:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:26.174 07:40:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:15:26.433 07:40:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:26.433 07:40:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:15:29.716 07:40:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:29.716 07:40:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:15:29.716 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75409
00:15:29.716 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75409 ']'
00:15:29.716 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75409
00:15:29.716 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:15:29.716 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:29.716 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75409
00:15:29.716 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:29.716 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:15:29.716 killing process with pid 75409
00:15:29.716 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75409'
00:15:29.716 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75409
00:15:29.974 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75409
00:15:29.974 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:15:29.974 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:30.233 07:40:55
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:30.233 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:30.233 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:30.233 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:30.233 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:15:30.233 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:30.233 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:15:30.233 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:30.233 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:30.233 rmmod nvme_tcp 00:15:30.233 rmmod nvme_fabrics 00:15:30.492 rmmod nvme_keyring 00:15:30.492 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:30.492 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:15:30.492 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:15:30.492 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75149 ']' 00:15:30.492 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75149 00:15:30.492 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75149 ']' 00:15:30.492 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75149 00:15:30.492 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:15:30.492 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:30.492 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75149 00:15:30.492 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:30.492 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:30.492 killing process with pid 75149 00:15:30.492 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75149' 00:15:30.492 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75149 00:15:30.492 07:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75149 00:15:30.751 07:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:30.751 07:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:30.751 07:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:30.751 07:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.751 07:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:30.751 07:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.751 07:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:30.751 07:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.751 
07:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:30.751 00:15:30.751 real 0m32.842s 00:15:30.751 user 2m6.559s 00:15:30.751 sys 0m5.737s 00:15:30.751 07:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:30.751 ************************************ 00:15:30.751 END TEST nvmf_failover 00:15:30.751 ************************************ 00:15:30.751 07:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:30.751 07:40:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:30.751 07:40:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:30.751 07:40:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.751 07:40:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.751 ************************************ 00:15:30.751 START TEST nvmf_host_discovery 00:15:30.751 ************************************ 00:15:30.751 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:31.011 * Looking for test storage... 00:15:31.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:31.011 Cannot find device "nvmf_tgt_br" 00:15:31.011 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:15:31.012 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.012 Cannot find device "nvmf_tgt_br2" 00:15:31.012 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:15:31.012 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:31.012 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:31.012 Cannot find device "nvmf_tgt_br" 00:15:31.012 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:15:31.012 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:31.012 Cannot find device "nvmf_tgt_br2" 00:15:31.012 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:15:31.012 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:31.012 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:31.012 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.012 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.012 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:31.012 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.012 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.012 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:31.012 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:31.012 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:15:31.012 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:31.012 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:31.012 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:31.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:31.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:15:31.271 00:15:31.271 --- 10.0.0.2 ping statistics --- 00:15:31.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.271 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:31.271 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:31.271 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:15:31.271 00:15:31.271 --- 10.0.0.3 ping statistics --- 00:15:31.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.271 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:31.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:31.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:31.271 00:15:31.271 --- 10.0.0.1 ping statistics --- 00:15:31.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.271 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=75757 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 75757 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75757 ']' 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:31.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
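Condensed from the trace above, the virtual topology that nvmf_veth_init builds for this test is sketched below. Only one target interface is shown; the run above adds nvmf_tgt_if2 with 10.0.0.3 the same way, and every command in the sketch is one that appears in the trace.

  # Sketch of the veth/netns topology built above (single target interface shown).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # initiator side reaching the target address inside the namespace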
00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:31.271 07:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.271 [2024-07-26 07:40:56.832897] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:15:31.271 [2024-07-26 07:40:56.833008] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.530 [2024-07-26 07:40:56.967960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.530 [2024-07-26 07:40:57.084495] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.530 [2024-07-26 07:40:57.084577] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.530 [2024-07-26 07:40:57.084605] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.530 [2024-07-26 07:40:57.084614] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.530 [2024-07-26 07:40:57.084621] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.530 [2024-07-26 07:40:57.084657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.789 [2024-07-26 07:40:57.158837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.356 [2024-07-26 07:40:57.833289] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.356 [2024-07-26 07:40:57.841415] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.356 null0 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.356 null1 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75789 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75789 /tmp/host.sock 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75789 ']' 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.356 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.356 07:40:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.356 [2024-07-26 07:40:57.940399] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:15:32.356 [2024-07-26 07:40:57.940550] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75789 ] 00:15:32.614 [2024-07-26 07:40:58.080219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.614 [2024-07-26 07:40:58.213060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.872 [2024-07-26 07:40:58.290920] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.438 07:40:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:33.438 07:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.438 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:33.438 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:33.438 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.438 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.438 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.438 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:33.696 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:33.696 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:33.696 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:33.696 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.696 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.696 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:33.696 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.696 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.697 07:40:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.697 [2024-07-26 07:40:59.257768] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:33.697 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:33.955 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:33.956 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:33.956 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:33.956 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:33.956 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:33.956 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.956 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.956 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:33.956 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:33.956 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.956 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:15:33.956 07:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:15:34.523 [2024-07-26 07:40:59.926589] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:34.523 [2024-07-26 07:40:59.926627] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:34.523 [2024-07-26 07:40:59.926663] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:34.523 
[2024-07-26 07:40:59.932646] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:34.523 [2024-07-26 07:40:59.990023] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:34.523 [2024-07-26 07:40:59.990052] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
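At this point in the trace the target application (pid 75757, inside the namespace) has a TCP transport, a discovery listener on 10.0.0.2:8009, null bdevs, and subsystem nqn.2016-06.io.spdk:cnode0 with host nqn.2021-12.io.spdk:test allowed, while the host application on /tmp/host.sock has run bdev_nvme_start_discovery and the waitforcondition loops have just observed controller nvme0 and bdev nvme0n1 appear. The sketch below condenses that RPC sequence using scripts/rpc.py directly (the test's rpc_cmd wrapper resolves to the same calls); note that the real test starts discovery before the subsystem exists and relies on discovery AENs, whereas the sketch flattens this into a simple target-then-host order, and it assumes rpc.py is on PATH.

# Condensed sketch of the RPC flow exercised above (rpc.py == SPDK scripts/rpc.py, assumed on PATH).
TGT=/var/tmp/spdk.sock    # nvmf_tgt inside nvmf_tgt_ns_spdk (pid 75757 in the trace)
HOST=/tmp/host.sock       # second nvmf_tgt acting as the NVMe-oF host (pid 75789)

# Target side: transport, discovery listener, backing bdev, subsystem + data listener, allowed host.
rpc.py -s "$TGT" nvmf_create_transport -t tcp -o -u 8192
rpc.py -s "$TGT" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc.py -s "$TGT" bdev_null_create null0 1000 512
rpc.py -s "$TGT" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py -s "$TGT" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc.py -s "$TGT" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py -s "$TGT" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

# Host side: start discovery against 8009, then poll until the attached controller/bdev show up.
rpc.py -s "$HOST" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
rpc.py -s "$HOST" bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
rpc.py -s "$HOST" bdev_get_bdevs            | jq -r '.[].name'   # expect: nvme0n1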
00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:35.089 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # local max=10 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.348 [2024-07-26 07:41:00.836123] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:35.348 [2024-07-26 07:41:00.836988] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:35.348 [2024-07-26 07:41:00.837042] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:35.348 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:35.349 [2024-07-26 07:41:00.843004] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:35.349 
07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:35.349 [2024-07-26 07:41:00.901293] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:35.349 [2024-07-26 07:41:00.901322] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:35.349 [2024-07-26 07:41:00.901330] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 
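The test has just added a second listener on port 4421 and is now verifying that controller nvme0 exposes paths on both 4420 and 4421. The check is built on the get_subsystem_paths helper visible as host/discovery.sh@63 in the xtrace output; the reconstruction below is pieced together from those trace lines rather than copied from the repo, and it again assumes rpc.py stands in for the test's rpc_cmd wrapper.

# Reconstructed from the host/discovery.sh@63 xtrace lines above (not copied from the repo).
get_subsystem_paths() {
    # List the trsvcid of every path attached to controller $1, sorted and space-separated.
    rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

# After nvmf_subsystem_add_listener ... -s 4421, discovery adds a second path to nvme0:
[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]] && echo "both paths present"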
00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:35.349 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:35.607 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:15:35.607 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:35.607 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.607 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.607 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:35.607 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:35.607 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:35.607 07:41:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.607 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:35.607 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.608 [2024-07-26 07:41:01.069227] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:35.608 [2024-07-26 07:41:01.069259] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:35.608 [2024-07-26 07:41:01.075235] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:35.608 [2024-07-26 07:41:01.075268] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:35.608 [2024-07-26 07:41:01.075373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.608 [2024-07-26 07:41:01.075422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.608 [2024-07-26 07:41:01.075437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.608 [2024-07-26 07:41:01.075447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.608 [2024-07-26 07:41:01.075457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.608 [2024-07-26 07:41:01.075480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.608 [2024-07-26 07:41:01.075492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.608 [2024-07-26 07:41:01.075502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.608 [2024-07-26 07:41:01.075512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248f620 is same with the state(5) to be set 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:35.608 
07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:35.608 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:35.867 07:41:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.867 07:41:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.243 [2024-07-26 07:41:02.447186] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:37.243 [2024-07-26 07:41:02.447219] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:37.243 [2024-07-26 07:41:02.447255] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:37.243 [2024-07-26 07:41:02.453242] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:37.243 [2024-07-26 07:41:02.514082] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:37.243 [2024-07-26 07:41:02.514145] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.243 request: 00:15:37.243 { 00:15:37.243 "name": "nvme", 00:15:37.243 "trtype": "tcp", 00:15:37.243 "traddr": "10.0.0.2", 00:15:37.243 "adrfam": "ipv4", 00:15:37.243 "trsvcid": "8009", 00:15:37.243 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:37.243 "wait_for_attach": true, 00:15:37.243 "method": "bdev_nvme_start_discovery", 00:15:37.243 "req_id": 1 00:15:37.243 } 00:15:37.243 Got JSON-RPC error response 00:15:37.243 response: 00:15:37.243 { 00:15:37.243 "code": -17, 00:15:37.243 "message": "File exists" 00:15:37.243 } 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.243 request: 00:15:37.243 { 00:15:37.243 "name": "nvme_second", 00:15:37.243 "trtype": "tcp", 00:15:37.243 "traddr": "10.0.0.2", 00:15:37.243 "adrfam": "ipv4", 00:15:37.243 "trsvcid": "8009", 00:15:37.243 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:37.243 "wait_for_attach": true, 00:15:37.243 "method": "bdev_nvme_start_discovery", 00:15:37.243 "req_id": 1 00:15:37.243 } 00:15:37.243 Got JSON-RPC error response 00:15:37.243 response: 00:15:37.243 { 00:15:37.243 "code": -17, 00:15:37.243 "message": "File exists" 00:15:37.243 } 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 
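[annotation] The exchange traced above is the duplicate-discovery check in host/discovery.sh: after discovery is already running, the script re-issues bdev_nvme_start_discovery (first with the same name "nvme", then as "nvme_second" against the same 10.0.0.2:8009 endpoint) and expects the JSON-RPC error -17 "File exists". The NOT wrapper from autotest_common.sh inverts the exit status so the step passes only when the RPC fails. A minimal sketch of that pattern follows; rpc_cmd is shorthand for the rpc.py client, shown here invoked directly, and the NOT body is a simplified paraphrase of the es bookkeeping visible in the trace, not a copy of the real helper.

  # Sketch (hypothetical reconstruction) of the duplicate-discovery check above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/tmp/host.sock

  NOT() {                      # succeed only if the wrapped command fails
      if "$@"; then return 1; fi
      return 0
  }

  # First start attaches the discovery ctrlr on 10.0.0.2:8009 and succeeds.
  "$rpc" -s "$sock" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -w

  # Re-using the same name (or the same discovery endpoint) must fail with
  # -17 "File exists", which NOT turns into a passing step.
  NOT "$rpc" -s "$sock" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -w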
00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.243 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:37.244 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:37.244 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:37.244 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:37.244 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:37.244 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.244 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:37.244 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.244 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:37.244 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.244 07:41:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.617 [2024-07-26 07:41:03.782564] uring.c: 663:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:38.617 [2024-07-26 07:41:03.782642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x242ef70 with addr=10.0.0.2, port=8010 00:15:38.617 [2024-07-26 07:41:03.782686] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:38.617 [2024-07-26 07:41:03.782698] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:38.617 [2024-07-26 07:41:03.782710] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:39.183 [2024-07-26 07:41:04.782581] uring.c: 663:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:39.183 [2024-07-26 07:41:04.782634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x242ef70 with addr=10.0.0.2, port=8010 00:15:39.183 [2024-07-26 07:41:04.782660] 
nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:39.183 [2024-07-26 07:41:04.782670] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:39.183 [2024-07-26 07:41:04.782680] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:40.563 [2024-07-26 07:41:05.782389] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:15:40.563 request: 00:15:40.563 { 00:15:40.563 "name": "nvme_second", 00:15:40.563 "trtype": "tcp", 00:15:40.563 "traddr": "10.0.0.2", 00:15:40.563 "adrfam": "ipv4", 00:15:40.563 "trsvcid": "8010", 00:15:40.563 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:40.563 "wait_for_attach": false, 00:15:40.563 "attach_timeout_ms": 3000, 00:15:40.563 "method": "bdev_nvme_start_discovery", 00:15:40.563 "req_id": 1 00:15:40.563 } 00:15:40.563 Got JSON-RPC error response 00:15:40.563 response: 00:15:40.563 { 00:15:40.563 "code": -110, 00:15:40.563 "message": "Connection timed out" 00:15:40.563 } 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75789 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe 
-v -r nvme-tcp 00:15:40.563 rmmod nvme_tcp 00:15:40.563 rmmod nvme_fabrics 00:15:40.563 rmmod nvme_keyring 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 75757 ']' 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 75757 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 75757 ']' 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 75757 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75757 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:40.563 killing process with pid 75757 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75757' 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 75757 00:15:40.563 07:41:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 75757 00:15:40.821 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:40.821 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:40.821 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:40.821 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:40.821 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:40.821 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.821 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.821 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.821 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:40.821 00:15:40.821 real 0m10.027s 00:15:40.821 user 0m19.204s 00:15:40.821 sys 0m1.997s 00:15:40.821 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:40.821 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.821 ************************************ 00:15:40.821 END TEST nvmf_host_discovery 00:15:40.821 ************************************ 00:15:40.821 07:41:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 
00:15:40.821 07:41:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:40.821 07:41:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:40.821 07:41:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.821 ************************************ 00:15:40.821 START TEST nvmf_host_multipath_status 00:15:40.821 ************************************ 00:15:40.821 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:41.080 * Looking for test storage... 00:15:41.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.080 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 
-- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:41.081 Cannot find device "nvmf_tgt_br" 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.081 Cannot find device "nvmf_tgt_br2" 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:41.081 Cannot find device "nvmf_tgt_br" 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:41.081 Cannot find device "nvmf_tgt_br2" 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:41.081 
07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:41.081 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:41.348 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:41.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:41.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:15:41.349 00:15:41.349 --- 10.0.0.2 ping statistics --- 00:15:41.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.349 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:41.349 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:41.349 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:15:41.349 00:15:41.349 --- 10.0.0.3 ping statistics --- 00:15:41.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.349 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:41.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:41.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:41.349 00:15:41.349 --- 10.0.0.1 ping statistics --- 00:15:41.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.349 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:41.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
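[annotation] The nvmf_veth_init trace above builds the whole test topology: a network namespace for the target, two veth pairs plus the nvmf_init_if pair, a bridge joining them, 10.0.0.1/24 on the host side and 10.0.0.2/24 plus 10.0.0.3/24 inside the namespace, an iptables accept rule for port 4420, all verified with the three pings. Collected in one place for readability, using only the commands that appear in the trace (error handling and the prior cleanup attempts omitted):

  ip netns add nvmf_tgt_ns_spdk

  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br  up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1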
00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76239 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76239 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 76239 ']' 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:41.349 07:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:41.349 [2024-07-26 07:41:06.937022] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:15:41.349 [2024-07-26 07:41:06.937309] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.638 [2024-07-26 07:41:07.078433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:41.638 [2024-07-26 07:41:07.197535] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.638 [2024-07-26 07:41:07.197852] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:41.638 [2024-07-26 07:41:07.198001] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:41.638 [2024-07-26 07:41:07.198129] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:41.638 [2024-07-26 07:41:07.198170] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
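[annotation] With the namespace in place, nvmfappstart launches the target inside it and waits for its JSON-RPC socket; the trace records the exact invocation and the resulting pid (76239). A minimal sketch of that startup step, with the real waitforlisten helper reduced to a plain poll; the /var/tmp/spdk.sock path and the rpc_get_methods probe are assumptions standing in for the fuller readiness check.

  # Sketch of the nvmfappstart step traced above (readiness wait simplified).
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$rpc" rpc_get_methods &>/dev/null; do   # poll the default RPC socket
      sleep 0.1
  done
  echo "nvmf_tgt ready, pid $nvmfpid"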
00:15:41.638 [2024-07-26 07:41:07.198404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.638 [2024-07-26 07:41:07.198413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.897 [2024-07-26 07:41:07.272333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:42.464 07:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:42.465 07:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:15:42.465 07:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:42.465 07:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:42.465 07:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:42.465 07:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.465 07:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76239 00:15:42.465 07:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:42.723 [2024-07-26 07:41:08.217058] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:42.723 07:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:42.981 Malloc0 00:15:42.981 07:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:43.239 07:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:43.498 07:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.757 [2024-07-26 07:41:09.169075] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.757 07:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:44.015 [2024-07-26 07:41:09.441115] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:44.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
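[annotation] The target-side provisioning for the multipath test finishes just above: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, a subsystem created with ANA reporting (-r) and a two-namespace cap (-m 2), the bdev added as a namespace, and listeners on both 4420 and 4421 so bdevperf can attach two paths to the same namespace. Collected into one block, commands as traced (rpc.py talks to the target's default socket here, which is shared with the host since UNIX sockets live in the filesystem):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0                           # 64 MB, 512 B blocks
  "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -r -m 2  # ANA reporting on
  "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420  # path 1
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421  # path 2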
00:15:44.015 07:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76289 00:15:44.015 07:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:44.015 07:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:44.015 07:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76289 /var/tmp/bdevperf.sock 00:15:44.015 07:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 76289 ']' 00:15:44.015 07:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:44.015 07:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:44.015 07:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:44.015 07:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:44.015 07:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:44.951 07:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:44.951 07:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:15:44.951 07:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:45.209 07:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:15:45.467 Nvme0n1 00:15:45.467 07:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:45.726 Nvme0n1 00:15:45.985 07:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:45.985 07:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:47.886 07:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:47.886 07:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:48.143 07:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:48.401 07:41:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:49.336 07:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:49.336 07:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:49.336 07:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:49.336 07:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:49.611 07:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:49.611 07:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:49.611 07:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:49.611 07:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:49.872 07:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:49.872 07:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:49.872 07:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:49.872 07:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.130 07:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:50.130 07:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:50.130 07:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.130 07:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:50.388 07:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:50.388 07:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:50.388 07:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:50.388 07:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.647 07:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:50.647 07:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:50.647 07:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.647 07:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:50.905 07:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:50.905 07:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:50.905 07:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:51.163 07:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:51.421 07:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:52.356 07:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:52.356 07:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:52.356 07:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:52.356 07:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:52.614 07:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:52.614 07:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:52.614 07:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:52.614 07:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:52.873 07:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:52.873 07:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:52.873 07:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:52.873 07:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:53.131 07:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:53.131 07:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected 
true 00:15:53.131 07:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:53.131 07:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:53.389 07:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:53.389 07:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:53.389 07:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:53.389 07:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:53.647 07:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:53.647 07:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:53.647 07:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:53.647 07:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:53.904 07:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:53.904 07:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:15:53.905 07:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:54.162 07:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:15:54.420 07:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:15:55.352 07:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:15:55.352 07:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:55.352 07:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.352 07:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:55.610 07:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.610 07:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:55.610 07:41:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:55.610 07:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.868 07:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:55.868 07:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:55.868 07:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.868 07:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:56.125 07:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:56.125 07:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:56.125 07:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:56.125 07:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:56.383 07:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:56.383 07:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:56.383 07:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:56.383 07:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:56.641 07:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:56.641 07:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:56.641 07:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:56.641 07:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:56.899 07:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:56.899 07:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:15:56.899 07:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:57.156 07:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:57.414 07:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:15:58.787 07:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:15:58.787 07:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:58.787 07:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.787 07:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:58.787 07:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.787 07:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:58.787 07:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.787 07:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:59.045 07:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:59.045 07:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:59.045 07:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:59.045 07:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:59.302 07:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:59.302 07:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:59.302 07:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:59.302 07:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:59.560 07:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:59.560 07:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:59.560 07:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:59.560 07:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 
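For readers following the trace, every port_status check above reduces to one RPC against the bdevperf socket plus a jq filter on the io_paths list. A minimal sketch of that pattern, reusing the rpc.py path, socket and port numbers seen in this log (the helper name check_path_field is illustrative, not part of the SPDK test scripts):

# Query bdevperf's io paths over its RPC socket and compare one boolean
# field (current / connected / accessible) for one listener port.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

check_path_field() {
    local port=$1 field=$2 expected=$3
    local actual
    actual=$("$rpc" -s "$sock" bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# e.g. assert that the 4421 listener is still reported as accessible
check_path_field 4421 accessible true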
00:15:59.818 07:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:59.818 07:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:59.818 07:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:59.818 07:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.076 07:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:00.076 07:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:00.076 07:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:00.334 07:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:00.592 07:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:01.525 07:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:01.525 07:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:01.525 07:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:01.525 07:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.783 07:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:01.783 07:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:01.783 07:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:01.783 07:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.041 07:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:02.041 07:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:02.041 07:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.041 07:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:02.299 07:41:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:02.299 07:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:02.299 07:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.299 07:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:02.558 07:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:02.558 07:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:02.558 07:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.558 07:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:02.816 07:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:02.816 07:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:02.816 07:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.816 07:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:03.075 07:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:03.075 07:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:03.075 07:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:03.332 07:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:03.590 07:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:04.526 07:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:04.526 07:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:04.526 07:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.526 07:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:04.784 07:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:04.784 07:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:04.784 07:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.784 07:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:05.043 07:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:05.043 07:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:05.043 07:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.043 07:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:05.301 07:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:05.301 07:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:05.301 07:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.301 07:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:05.560 07:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:05.560 07:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:05.560 07:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:05.560 07:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.818 07:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:05.818 07:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:05.818 07:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.818 07:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:06.076 07:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:06.076 07:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:06.335 07:41:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:06.335 07:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:06.593 07:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:06.851 07:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:07.787 07:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:07.787 07:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:07.787 07:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.787 07:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:08.044 07:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.044 07:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:08.044 07:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.044 07:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:08.302 07:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.302 07:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:08.302 07:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.302 07:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:08.559 07:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.559 07:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:08.559 07:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.559 07:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:08.816 07:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.816 07:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:08.816 07:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.816 07:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:08.816 07:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.816 07:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:08.816 07:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.816 07:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:09.380 07:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:09.380 07:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:09.380 07:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:09.380 07:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:09.637 07:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:11.012 07:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:11.012 07:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:11.012 07:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.012 07:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:11.012 07:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:11.012 07:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:11.012 07:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:11.012 07:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.270 07:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.270 07:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:16:11.270 07:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.270 07:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:11.529 07:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.529 07:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:11.529 07:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.529 07:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:11.787 07:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.787 07:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:11.787 07:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.787 07:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:12.045 07:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:12.045 07:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:12.045 07:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:12.045 07:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.304 07:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:12.304 07:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:12.304 07:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:12.562 07:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:12.562 07:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:13.937 07:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:13.937 07:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:13.937 07:41:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.937 07:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:13.937 07:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.937 07:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:13.937 07:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.937 07:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:14.195 07:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.195 07:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:14.195 07:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.195 07:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:14.454 07:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.454 07:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:14.454 07:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.454 07:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:14.711 07:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.711 07:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:14.711 07:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.711 07:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:14.969 07:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.969 07:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:14.969 07:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.969 07:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:16:15.227 07:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.227 07:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:15.227 07:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:15.485 07:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:15.743 07:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:16.678 07:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:16.678 07:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:16.678 07:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.678 07:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:16.936 07:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.936 07:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:16.936 07:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.936 07:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:17.194 07:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:17.195 07:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:17.195 07:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:17.195 07:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.453 07:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.453 07:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:17.453 07:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:17.453 07:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
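Each ANA transition in this test follows the same flip-and-wait shape: set the ANA state of both listeners on the target side, sleep a second so the host's multipath layer can pick up the change, then re-check the six path flags. A small sketch of one such step, with the subsystem NQN, target address and ports taken from the log (the wrapper function set_ana_pair is illustrative):

# Flip the ANA state of the 4420 and 4421 listeners of cnode1 and give the
# host side a second to observe the change before the next check_status run.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

set_ana_pair() {
    # $1 = state for the 4420 listener, $2 = state for the 4421 listener
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    sleep 1
}

# the combination being verified right above: 4420 non_optimized, 4421 inaccessible
set_ana_pair non_optimized inaccessible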
00:16:17.712 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.712 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:17.712 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:17.712 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.970 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.970 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:17.970 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.970 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:18.229 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:18.229 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76289 00:16:18.229 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 76289 ']' 00:16:18.229 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 76289 00:16:18.229 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:16:18.229 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:18.229 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76289 00:16:18.229 killing process with pid 76289 00:16:18.229 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:18.229 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:18.229 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76289' 00:16:18.229 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 76289 00:16:18.229 07:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 76289 00:16:18.491 Connection closed with partial response: 00:16:18.491 00:16:18.491 00:16:18.491 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76289 00:16:18.491 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:18.491 [2024-07-26 07:41:09.508204] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
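From here on the log quotes the bdevperf-side output dumped from try.txt after the process was killed and waited on above. Each pair of entries is an I/O submission printed by nvme_io_qpair_print_command followed by its completion from spdk_nvme_print_completion; the ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions are the path-related ANA status returned for I/O issued while a listener had been set inaccessible earlier in this run. A quick, purely illustrative way to gauge how many I/Os hit that state once the file is on disk:

# count completions in the dumped bdevperf trace that carry the
# path-related "ANA inaccessible" status
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt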
00:16:18.491 [2024-07-26 07:41:09.508308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76289 ] 00:16:18.491 [2024-07-26 07:41:09.639135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.491 [2024-07-26 07:41:09.753662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.491 [2024-07-26 07:41:09.826123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:18.491 Running I/O for 90 seconds... 00:16:18.491 [2024-07-26 07:41:25.704563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.491 [2024-07-26 07:41:25.704663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.704745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.491 [2024-07-26 07:41:25.704766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.704790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.491 [2024-07-26 07:41:25.704805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.704827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.491 [2024-07-26 07:41:25.704842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.704863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.491 [2024-07-26 07:41:25.704877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.704912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.491 [2024-07-26 07:41:25.704926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.704946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.491 [2024-07-26 07:41:25.704960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.704981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.491 [2024-07-26 07:41:25.704995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 
sqhd:0032 p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.491 [2024-07-26 07:41:25.705029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.491 [2024-07-26 07:41:25.705063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.491 [2024-07-26 07:41:25.705124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.491 [2024-07-26 07:41:25.705161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.491 [2024-07-26 07:41:25.705227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.491 [2024-07-26 07:41:25.705264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.491 [2024-07-26 07:41:25.705301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:18.491 [2024-07-26 07:41:25.705338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.491 [2024-07-26 07:41:25.705373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.491 [2024-07-26 07:41:25.705409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.491 [2024-07-26 07:41:25.705444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.491 [2024-07-26 07:41:25.705480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.491 [2024-07-26 07:41:25.705534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.491 [2024-07-26 07:41:25.705570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.491 [2024-07-26 07:41:25.705617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.491 [2024-07-26 07:41:25.705670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.491 [2024-07-26 07:41:25.705705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.491 [2024-07-26 07:41:25.705740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.491 [2024-07-26 07:41:25.705777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:18.491 [2024-07-26 07:41:25.705797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.491 [2024-07-26 
07:41:25.705811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:16:18.492 [... several hundred similar nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs: READ and WRITE commands on sqid:1 (len:8, LBAs 95384-97216), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), logged in two bursts at 07:41:25 and 07:41:41 ...]
00:16:18.496 Received shutdown signal, test time was about 32.261744 seconds
00:16:18.496
00:16:18.496 Latency(us)
00:16:18.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:18.496 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:16:18.496 Verification LBA range: start 0x0 length 0x4000
00:16:18.496 Nvme0n1 : 32.26 8761.32 34.22 0.00 0.00 14578.72 183.39 4026531.84
00:16:18.496 ===================================================================================================================
00:16:18.496 Total : 8761.32 34.22 0.00 0.00 14578.72 183.39 4026531.84
00:16:18.496 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:18.754 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:16:18.754 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:16:18.754 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:16:18.754 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:18.754 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:16:18.754 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:18.754 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:16:18.754 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:18.754 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:18.754 rmmod nvme_tcp
00:16:18.754 rmmod nvme_fabrics
00:16:19.013 rmmod nvme_keyring
00:16:19.013 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:19.013 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:16:19.013 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:16:19.013 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76239 ']'
00:16:19.013 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76239
00:16:19.013 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 76239 ']'
00:16:19.013 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 76239
00:16:19.013 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:16:19.013 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:19.013 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76239
00:16:19.013 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:19.013 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:19.013 killing process with pid 76239
00:16:19.013 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76239'
00:16:19.013 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 76239
00:16:19.013 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 76239
00:16:19.271 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:16:19.271 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:16:19.271 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:16:19.271 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:19.271 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:19.271 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:19.271 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:19.271 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:19.271 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:16:19.271
00:16:19.271 real 0m38.355s
00:16:19.271 user 2m2.809s
00:16:19.271 sys 0m11.753s
00:16:19.271 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:19.271 07:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:16:19.271 ************************************
00:16:19.271 END TEST nvmf_host_multipath_status
00:16:19.271 ************************************
00:16:19.271 07:41:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc
/home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:19.271 07:41:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:19.271 07:41:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.271 07:41:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.271 ************************************ 00:16:19.271 START TEST nvmf_discovery_remove_ifc 00:16:19.271 ************************************ 00:16:19.271 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:19.591 * Looking for test storage... 00:16:19.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.592 07:41:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.592 
07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:19.592 07:41:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:19.592 Cannot find device "nvmf_tgt_br" 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:19.592 Cannot find device "nvmf_tgt_br2" 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:19.592 Cannot find device "nvmf_tgt_br" 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:19.592 Cannot find device "nvmf_tgt_br2" 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:16:19.592 07:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:19.592 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:19.592 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:19.592 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.592 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:19.592 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:19.592 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type 
veth peer name nvmf_init_br 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:19.593 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:19.885 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:19.885 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:19.885 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:19.885 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:19.885 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:19.885 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:19.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:19.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:16:19.885 00:16:19.885 --- 10.0.0.2 ping statistics --- 00:16:19.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.885 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:16:19.885 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:19.885 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:19.885 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:16:19.885 00:16:19.885 --- 10.0.0.3 ping statistics --- 00:16:19.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.885 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:19.885 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:19.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:19.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:19.886 00:16:19.886 --- 10.0.0.1 ping statistics --- 00:16:19.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.886 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77067 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77067 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 77067 ']' 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:19.886 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:19.886 07:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:19.886 [2024-07-26 07:41:45.347406] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:16:19.886 [2024-07-26 07:41:45.347522] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.144 [2024-07-26 07:41:45.485370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.144 [2024-07-26 07:41:45.612435] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.144 [2024-07-26 07:41:45.612519] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.144 [2024-07-26 07:41:45.612532] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:20.145 [2024-07-26 07:41:45.612541] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:20.145 [2024-07-26 07:41:45.612549] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:20.145 [2024-07-26 07:41:45.612584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.145 [2024-07-26 07:41:45.686865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:20.711 07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.711 07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:16:20.711 07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:20.711 07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:20.711 07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:20.969 07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.969 07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:20.969 07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.969 07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:20.969 [2024-07-26 07:41:46.354532] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.969 [2024-07-26 07:41:46.362690] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:20.969 null0 00:16:20.969 [2024-07-26 07:41:46.394560] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.969 07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.969 
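For reference, the veth/namespace topology that nvmf_veth_init assembles in the preceding steps can be condensed to the sketch below. The commands are transcribed from the @166 through @207 lines above and are illustrative rather than the harness itself (the real common.sh also tears down any stale links first, which is why the earlier delete attempts log "Cannot find device").

  # Target network namespace plus three veth pairs bridged together (sketch)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Initiator keeps 10.0.0.1; the namespaced target owns 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the host-side peers so the initiator can reach both target addresses
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The target application is then launched inside that namespace, as the @480 line shows (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2), so only the namespaced 10.0.0.x addresses are visible to it.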
07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77100 00:16:20.969 07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:20.969 07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77100 /tmp/host.sock 00:16:20.969 07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 77100 ']' 00:16:20.969 07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:16:20.969 07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:20.969 07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:20.969 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:20.969 07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:20.969 07:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:20.969 [2024-07-26 07:41:46.474289] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:16:20.969 [2024-07-26 07:41:46.474381] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77100 ] 00:16:21.228 [2024-07-26 07:41:46.615534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.228 [2024-07-26 07:41:46.743377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.161 07:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:22.161 07:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:16:22.161 07:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:22.161 07:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:22.161 07:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.162 07:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:22.162 07:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.162 07:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:22.162 07:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.162 07:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:22.162 [2024-07-26 07:41:47.549742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:22.162 07:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.162 07:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:22.162 07:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.162 07:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:23.097 [2024-07-26 07:41:48.617671] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:23.097 [2024-07-26 07:41:48.617717] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:23.097 [2024-07-26 07:41:48.617735] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:23.097 [2024-07-26 07:41:48.623716] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:23.097 [2024-07-26 07:41:48.681727] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:23.097 [2024-07-26 07:41:48.681945] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:23.097 [2024-07-26 07:41:48.682039] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:23.097 [2024-07-26 07:41:48.682185] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:23.097 [2024-07-26 07:41:48.682445] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:23.097 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.097 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:23.097 [2024-07-26 07:41:48.686304] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2291ef0 was disconnected and freed. delete nvme_qpair. 
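The host side of the test drives a second SPDK app over /tmp/host.sock. The sequence logged above boils down to the following sketch; scripts/rpc.py is used here in place of the harness's rpc_cmd wrapper, and the backgrounding/waitforlisten handling is simplified.

  # Start a host-side SPDK app acting as the NVMe-oF initiator (bdev_nvme module logging enabled)
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  # (the harness waits for /tmp/host.sock via waitforlisten before issuing RPCs)
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock "$@"; }
  rpc bdev_nvme_set_options -e 1
  rpc framework_start_init
  # Attach via the discovery service on 10.0.0.2:8009 and wait until the NVM subsystem is attached;
  # the short loss/reconnect timeouts let the controller be torn down within a couple of seconds
  # once the path disappears later in the test
  rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach
  # The attached namespace surfaces as a bdev (nvme0n1 in the log)
  rpc bdev_get_bdevs | jq -r '.[].name'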
00:16:23.097 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:23.097 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:23.097 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.097 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:23.097 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:23.097 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:23.097 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:23.354 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.354 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:23.354 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:23.354 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:23.354 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:23.354 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:23.354 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:23.354 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.354 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:23.354 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:23.354 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:23.354 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:23.354 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.354 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:23.354 07:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:24.287 07:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:24.287 07:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:24.287 07:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.287 07:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:24.287 07:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:24.287 07:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:24.287 07:41:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:24.287 07:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.287 07:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:24.287 07:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:25.660 07:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:25.660 07:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:25.660 07:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.660 07:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:25.660 07:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:25.660 07:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:25.660 07:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:25.660 07:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.660 07:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:25.660 07:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:26.609 07:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:26.609 07:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:26.609 07:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:26.609 07:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.609 07:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:26.609 07:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:26.609 07:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:26.609 07:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.609 07:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:26.609 07:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:27.544 07:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:27.544 07:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:27.544 07:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:27.544 07:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.544 07:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:27.544 07:41:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:27.544 07:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:27.544 07:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.544 07:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:27.544 07:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:28.476 07:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:28.476 07:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:28.476 07:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:28.476 07:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.476 07:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:28.476 07:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:28.476 07:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:28.733 07:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.733 [2024-07-26 07:41:54.119133] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:28.733 [2024-07-26 07:41:54.119215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.733 [2024-07-26 07:41:54.119232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.733 [2024-07-26 07:41:54.119246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.733 [2024-07-26 07:41:54.119256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.733 [2024-07-26 07:41:54.119266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.733 [2024-07-26 07:41:54.119275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.733 [2024-07-26 07:41:54.119285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.733 [2024-07-26 07:41:54.119295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.733 [2024-07-26 07:41:54.119305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:28.733 [2024-07-26 07:41:54.119314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:28.733 [2024-07-26 07:41:54.119323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x21f7ac0 is same with the state(5) to be set 00:16:28.733 07:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:28.733 07:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:28.733 [2024-07-26 07:41:54.129128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f7ac0 (9): Bad file descriptor 00:16:28.733 [2024-07-26 07:41:54.139150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:29.666 07:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:29.666 07:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.666 07:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.666 07:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:29.666 07:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:29.666 07:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:29.666 07:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:29.666 [2024-07-26 07:41:55.147578] uring.c: 663:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:29.666 [2024-07-26 07:41:55.147881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7ac0 with addr=10.0.0.2, port=4420 00:16:29.666 [2024-07-26 07:41:55.148189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f7ac0 is same with the state(5) to be set 00:16:29.666 [2024-07-26 07:41:55.148563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f7ac0 (9): Bad file descriptor 00:16:29.666 [2024-07-26 07:41:55.149435] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:29.666 [2024-07-26 07:41:55.149522] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:29.666 [2024-07-26 07:41:55.149544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:29.666 [2024-07-26 07:41:55.149563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:29.666 [2024-07-26 07:41:55.149600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:29.666 [2024-07-26 07:41:55.149620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:29.666 07:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.666 07:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:29.666 07:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:30.602 [2024-07-26 07:41:56.149681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
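While the path is down, the script simply polls the bdev list once per second until it matches what it expects. The get_bdev_list/wait_for_bdev pattern visible in the @29/@33/@34 lines is roughly equivalent to the sketch below; the helper names are the script's, but the loop body is a reconstruction and the real helper presumably also bounds how long it polls.

  get_bdev_list() {
      # List bdev names known to the host app, normalised to a single sorted line
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
          | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      local expected=$1   # '' while waiting for nvme0n1 to vanish, nvme1n1 after the re-add
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }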
00:16:30.602 [2024-07-26 07:41:56.149731] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:30.602 [2024-07-26 07:41:56.149760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:30.602 [2024-07-26 07:41:56.149771] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:16:30.602 [2024-07-26 07:41:56.149797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:30.602 [2024-07-26 07:41:56.149832] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:30.602 [2024-07-26 07:41:56.149888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.602 [2024-07-26 07:41:56.149904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.602 [2024-07-26 07:41:56.149919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.602 [2024-07-26 07:41:56.149945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.602 [2024-07-26 07:41:56.149955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.602 [2024-07-26 07:41:56.149965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.602 [2024-07-26 07:41:56.149975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.602 [2024-07-26 07:41:56.149984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.602 [2024-07-26 07:41:56.149995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.602 [2024-07-26 07:41:56.150004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.602 [2024-07-26 07:41:56.150014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:16:30.602 [2024-07-26 07:41:56.150035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21fb860 (9): Bad file descriptor 00:16:30.602 [2024-07-26 07:41:56.150816] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:30.602 [2024-07-26 07:41:56.150834] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:30.602 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:30.602 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:30.602 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:30.602 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:30.602 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:30.602 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.602 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:30.860 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.860 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:30.860 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:30.860 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:30.860 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:30.860 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:30.860 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:30.860 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:30.860 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.860 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:30.860 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:30.860 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:30.860 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.860 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:30.860 07:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:31.796 07:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:31.796 07:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.796 07:41:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:31.796 07:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.796 07:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:31.796 07:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:31.796 07:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:31.796 07:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.796 07:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:31.796 07:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:32.731 [2024-07-26 07:41:58.162130] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:32.731 [2024-07-26 07:41:58.162157] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:32.731 [2024-07-26 07:41:58.162176] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:32.731 [2024-07-26 07:41:58.168168] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:32.731 [2024-07-26 07:41:58.224835] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:32.731 [2024-07-26 07:41:58.225038] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:32.731 [2024-07-26 07:41:58.225111] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:32.731 [2024-07-26 07:41:58.225234] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:16:32.731 [2024-07-26 07:41:58.225300] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:32.731 [2024-07-26 07:41:58.231041] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x226f460 was disconnected and freed. delete nvme_qpair. 
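With the old controller cleaned up, the test re-adds the address, brings the interface back up and waits for discovery to attach a fresh controller (nvme1/nvme1n1). The whole remove/re-add cycle, as executed by the @75/@76 and @82/@83 steps above, is effectively the following (wait_for_bdev as sketched earlier):

  NS() { ip netns exec nvmf_tgt_ns_spdk "$@"; }
  NS ip addr del 10.0.0.2/24 dev nvmf_tgt_if   # pull the listener's address away from the host
  NS ip link set nvmf_tgt_if down
  wait_for_bdev ''                             # nvme0n1 disappears once reconnects give up
  NS ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # restore the path
  NS ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1                        # discovery re-attaches and a new bdev appears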
00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77100 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 77100 ']' 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 77100 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77100 00:16:32.990 killing process with pid 77100 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77100' 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 77100 00:16:32.990 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 77100 00:16:33.248 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:33.248 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:33.248 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:16:33.248 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:33.248 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:16:33.248 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:33.248 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:33.248 rmmod nvme_tcp 00:16:33.248 rmmod nvme_fabrics 00:16:33.507 rmmod nvme_keyring 00:16:33.507 07:41:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:33.507 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:16:33.507 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:16:33.507 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77067 ']' 00:16:33.507 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77067 00:16:33.507 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 77067 ']' 00:16:33.507 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 77067 00:16:33.507 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:16:33.507 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:33.507 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77067 00:16:33.507 killing process with pid 77067 00:16:33.507 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:33.507 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:33.507 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77067' 00:16:33.507 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 77067 00:16:33.507 07:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 77067 00:16:33.766 07:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:33.766 07:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:33.766 07:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:33.766 07:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:33.766 07:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:33.766 07:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.766 07:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:33.766 07:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.766 07:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:33.766 00:16:33.766 real 0m14.433s 00:16:33.766 user 0m24.834s 00:16:33.766 sys 0m2.631s 00:16:33.766 07:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:33.766 07:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:33.766 ************************************ 00:16:33.766 END TEST nvmf_discovery_remove_ifc 00:16:33.766 ************************************ 00:16:33.766 07:41:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:33.766 07:41:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:33.766 07:41:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:33.766 07:41:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.766 ************************************ 00:16:33.766 START TEST nvmf_identify_kernel_target 00:16:33.766 ************************************ 00:16:33.766 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:34.025 * Looking for test storage... 00:16:34.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.025 
07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.025 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:34.026 Cannot find device "nvmf_tgt_br" 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:34.026 Cannot find device "nvmf_tgt_br2" 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:34.026 Cannot find device "nvmf_tgt_br" 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:34.026 Cannot find device "nvmf_tgt_br2" 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:34.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:34.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:34.026 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:34.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:16:34.285 00:16:34.285 --- 10.0.0.2 ping statistics --- 00:16:34.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.285 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:34.285 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:34.285 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:16:34.285 00:16:34.285 --- 10.0.0.3 ping statistics --- 00:16:34.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.285 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:34.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
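[Editor's note] The nvmf_veth_init trace above builds the virtual network this identify test runs over: three veth pairs, a target-side network namespace, and a bridge joining the host-side peers. The condensed bash sketch below is reconstructed from the commands visible in this log (the nvmf_tgt_ns_spdk namespace, interface names, and 10.0.0.x addresses are all taken from the trace); it is a reader's summary with set -e added for the sketch, not the nvmf/common.sh implementation itself.

#!/usr/bin/env bash
# Sketch of the topology nvmf_veth_init sets up in this run (see note above).
set -e

ip netns add nvmf_tgt_ns_spdk                     # target-side network namespace

# Three veth pairs: one for the initiator side, two for the target side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if          # initiator / kernel-target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic on port 4420 and let bridged frames pass.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks mirrored from the trace: every address must answer one ping.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The captured ping output continues below.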
00:16:34.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:34.285 00:16:34.285 --- 10.0.0.1 ping statistics --- 00:16:34.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.285 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:34.285 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:34.286 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:34.286 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:34.286 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:34.286 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:34.286 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:34.286 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:16:34.286 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:16:34.286 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:34.286 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:34.286 07:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:34.544 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:34.544 Waiting for block devices as requested 00:16:34.802 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:34.802 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:34.802 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:34.802 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:34.802 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:34.802 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:34.802 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:34.802 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:34.802 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:34.802 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:34.802 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:34.802 No valid GPT data, bailing 00:16:35.059 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:35.059 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:35.059 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:35.059 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:35.059 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:35.059 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:35.059 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:35.059 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:35.059 07:42:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:35.059 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:35.059 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:35.059 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:35.059 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:35.059 No valid GPT data, bailing 00:16:35.059 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:35.059 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:35.059 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:35.059 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:35.060 No valid GPT data, bailing 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:35.060 No valid GPT data, bailing 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:16:35.060 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:35.318 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid=437e2608-a818-4ddb-8068-388d756b599a -a 10.0.0.1 -t tcp -s 4420 00:16:35.318 00:16:35.318 Discovery Log Number of Records 2, Generation counter 2 00:16:35.318 =====Discovery Log Entry 0====== 00:16:35.318 trtype: tcp 00:16:35.318 adrfam: ipv4 00:16:35.318 subtype: current discovery subsystem 00:16:35.318 treq: not specified, sq flow control disable supported 00:16:35.318 portid: 1 00:16:35.318 trsvcid: 4420 00:16:35.318 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:35.318 traddr: 10.0.0.1 00:16:35.318 eflags: none 00:16:35.318 sectype: none 00:16:35.318 =====Discovery Log Entry 1====== 00:16:35.318 trtype: tcp 00:16:35.318 adrfam: ipv4 00:16:35.318 subtype: nvme subsystem 00:16:35.318 treq: not 
specified, sq flow control disable supported 00:16:35.318 portid: 1 00:16:35.318 trsvcid: 4420 00:16:35.318 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:35.318 traddr: 10.0.0.1 00:16:35.318 eflags: none 00:16:35.318 sectype: none 00:16:35.318 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:35.318 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:35.318 ===================================================== 00:16:35.318 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:35.318 ===================================================== 00:16:35.318 Controller Capabilities/Features 00:16:35.318 ================================ 00:16:35.318 Vendor ID: 0000 00:16:35.318 Subsystem Vendor ID: 0000 00:16:35.318 Serial Number: 983add4d671999bba57c 00:16:35.318 Model Number: Linux 00:16:35.318 Firmware Version: 6.7.0-68 00:16:35.318 Recommended Arb Burst: 0 00:16:35.318 IEEE OUI Identifier: 00 00 00 00:16:35.318 Multi-path I/O 00:16:35.318 May have multiple subsystem ports: No 00:16:35.318 May have multiple controllers: No 00:16:35.318 Associated with SR-IOV VF: No 00:16:35.318 Max Data Transfer Size: Unlimited 00:16:35.318 Max Number of Namespaces: 0 00:16:35.318 Max Number of I/O Queues: 1024 00:16:35.318 NVMe Specification Version (VS): 1.3 00:16:35.318 NVMe Specification Version (Identify): 1.3 00:16:35.318 Maximum Queue Entries: 1024 00:16:35.318 Contiguous Queues Required: No 00:16:35.318 Arbitration Mechanisms Supported 00:16:35.318 Weighted Round Robin: Not Supported 00:16:35.318 Vendor Specific: Not Supported 00:16:35.318 Reset Timeout: 7500 ms 00:16:35.318 Doorbell Stride: 4 bytes 00:16:35.318 NVM Subsystem Reset: Not Supported 00:16:35.318 Command Sets Supported 00:16:35.318 NVM Command Set: Supported 00:16:35.318 Boot Partition: Not Supported 00:16:35.318 Memory Page Size Minimum: 4096 bytes 00:16:35.318 Memory Page Size Maximum: 4096 bytes 00:16:35.318 Persistent Memory Region: Not Supported 00:16:35.318 Optional Asynchronous Events Supported 00:16:35.318 Namespace Attribute Notices: Not Supported 00:16:35.318 Firmware Activation Notices: Not Supported 00:16:35.318 ANA Change Notices: Not Supported 00:16:35.318 PLE Aggregate Log Change Notices: Not Supported 00:16:35.318 LBA Status Info Alert Notices: Not Supported 00:16:35.318 EGE Aggregate Log Change Notices: Not Supported 00:16:35.318 Normal NVM Subsystem Shutdown event: Not Supported 00:16:35.318 Zone Descriptor Change Notices: Not Supported 00:16:35.318 Discovery Log Change Notices: Supported 00:16:35.318 Controller Attributes 00:16:35.318 128-bit Host Identifier: Not Supported 00:16:35.318 Non-Operational Permissive Mode: Not Supported 00:16:35.318 NVM Sets: Not Supported 00:16:35.318 Read Recovery Levels: Not Supported 00:16:35.318 Endurance Groups: Not Supported 00:16:35.318 Predictable Latency Mode: Not Supported 00:16:35.318 Traffic Based Keep ALive: Not Supported 00:16:35.318 Namespace Granularity: Not Supported 00:16:35.318 SQ Associations: Not Supported 00:16:35.318 UUID List: Not Supported 00:16:35.318 Multi-Domain Subsystem: Not Supported 00:16:35.318 Fixed Capacity Management: Not Supported 00:16:35.318 Variable Capacity Management: Not Supported 00:16:35.318 Delete Endurance Group: Not Supported 00:16:35.318 Delete NVM Set: Not Supported 00:16:35.318 Extended LBA Formats Supported: Not Supported 00:16:35.318 Flexible Data 
Placement Supported: Not Supported 00:16:35.318 00:16:35.318 Controller Memory Buffer Support 00:16:35.318 ================================ 00:16:35.318 Supported: No 00:16:35.318 00:16:35.318 Persistent Memory Region Support 00:16:35.318 ================================ 00:16:35.318 Supported: No 00:16:35.318 00:16:35.318 Admin Command Set Attributes 00:16:35.318 ============================ 00:16:35.318 Security Send/Receive: Not Supported 00:16:35.318 Format NVM: Not Supported 00:16:35.318 Firmware Activate/Download: Not Supported 00:16:35.318 Namespace Management: Not Supported 00:16:35.318 Device Self-Test: Not Supported 00:16:35.318 Directives: Not Supported 00:16:35.318 NVMe-MI: Not Supported 00:16:35.318 Virtualization Management: Not Supported 00:16:35.318 Doorbell Buffer Config: Not Supported 00:16:35.318 Get LBA Status Capability: Not Supported 00:16:35.318 Command & Feature Lockdown Capability: Not Supported 00:16:35.318 Abort Command Limit: 1 00:16:35.318 Async Event Request Limit: 1 00:16:35.318 Number of Firmware Slots: N/A 00:16:35.318 Firmware Slot 1 Read-Only: N/A 00:16:35.318 Firmware Activation Without Reset: N/A 00:16:35.318 Multiple Update Detection Support: N/A 00:16:35.318 Firmware Update Granularity: No Information Provided 00:16:35.318 Per-Namespace SMART Log: No 00:16:35.318 Asymmetric Namespace Access Log Page: Not Supported 00:16:35.318 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:35.318 Command Effects Log Page: Not Supported 00:16:35.318 Get Log Page Extended Data: Supported 00:16:35.318 Telemetry Log Pages: Not Supported 00:16:35.318 Persistent Event Log Pages: Not Supported 00:16:35.318 Supported Log Pages Log Page: May Support 00:16:35.318 Commands Supported & Effects Log Page: Not Supported 00:16:35.318 Feature Identifiers & Effects Log Page:May Support 00:16:35.318 NVMe-MI Commands & Effects Log Page: May Support 00:16:35.318 Data Area 4 for Telemetry Log: Not Supported 00:16:35.318 Error Log Page Entries Supported: 1 00:16:35.318 Keep Alive: Not Supported 00:16:35.318 00:16:35.318 NVM Command Set Attributes 00:16:35.318 ========================== 00:16:35.318 Submission Queue Entry Size 00:16:35.318 Max: 1 00:16:35.319 Min: 1 00:16:35.319 Completion Queue Entry Size 00:16:35.319 Max: 1 00:16:35.319 Min: 1 00:16:35.319 Number of Namespaces: 0 00:16:35.319 Compare Command: Not Supported 00:16:35.319 Write Uncorrectable Command: Not Supported 00:16:35.319 Dataset Management Command: Not Supported 00:16:35.319 Write Zeroes Command: Not Supported 00:16:35.319 Set Features Save Field: Not Supported 00:16:35.319 Reservations: Not Supported 00:16:35.319 Timestamp: Not Supported 00:16:35.319 Copy: Not Supported 00:16:35.319 Volatile Write Cache: Not Present 00:16:35.319 Atomic Write Unit (Normal): 1 00:16:35.319 Atomic Write Unit (PFail): 1 00:16:35.319 Atomic Compare & Write Unit: 1 00:16:35.319 Fused Compare & Write: Not Supported 00:16:35.319 Scatter-Gather List 00:16:35.319 SGL Command Set: Supported 00:16:35.319 SGL Keyed: Not Supported 00:16:35.319 SGL Bit Bucket Descriptor: Not Supported 00:16:35.319 SGL Metadata Pointer: Not Supported 00:16:35.319 Oversized SGL: Not Supported 00:16:35.319 SGL Metadata Address: Not Supported 00:16:35.319 SGL Offset: Supported 00:16:35.319 Transport SGL Data Block: Not Supported 00:16:35.319 Replay Protected Memory Block: Not Supported 00:16:35.319 00:16:35.319 Firmware Slot Information 00:16:35.319 ========================= 00:16:35.319 Active slot: 0 00:16:35.319 00:16:35.319 00:16:35.319 Error Log 
00:16:35.319 ========= 00:16:35.319 00:16:35.319 Active Namespaces 00:16:35.319 ================= 00:16:35.319 Discovery Log Page 00:16:35.319 ================== 00:16:35.319 Generation Counter: 2 00:16:35.319 Number of Records: 2 00:16:35.319 Record Format: 0 00:16:35.319 00:16:35.319 Discovery Log Entry 0 00:16:35.319 ---------------------- 00:16:35.319 Transport Type: 3 (TCP) 00:16:35.319 Address Family: 1 (IPv4) 00:16:35.319 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:35.319 Entry Flags: 00:16:35.319 Duplicate Returned Information: 0 00:16:35.319 Explicit Persistent Connection Support for Discovery: 0 00:16:35.319 Transport Requirements: 00:16:35.319 Secure Channel: Not Specified 00:16:35.319 Port ID: 1 (0x0001) 00:16:35.319 Controller ID: 65535 (0xffff) 00:16:35.319 Admin Max SQ Size: 32 00:16:35.319 Transport Service Identifier: 4420 00:16:35.319 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:35.319 Transport Address: 10.0.0.1 00:16:35.319 Discovery Log Entry 1 00:16:35.319 ---------------------- 00:16:35.319 Transport Type: 3 (TCP) 00:16:35.319 Address Family: 1 (IPv4) 00:16:35.319 Subsystem Type: 2 (NVM Subsystem) 00:16:35.319 Entry Flags: 00:16:35.319 Duplicate Returned Information: 0 00:16:35.319 Explicit Persistent Connection Support for Discovery: 0 00:16:35.319 Transport Requirements: 00:16:35.319 Secure Channel: Not Specified 00:16:35.319 Port ID: 1 (0x0001) 00:16:35.319 Controller ID: 65535 (0xffff) 00:16:35.319 Admin Max SQ Size: 32 00:16:35.319 Transport Service Identifier: 4420 00:16:35.319 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:35.319 Transport Address: 10.0.0.1 00:16:35.319 07:42:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:35.576 get_feature(0x01) failed 00:16:35.576 get_feature(0x02) failed 00:16:35.576 get_feature(0x04) failed 00:16:35.576 ===================================================== 00:16:35.576 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:35.576 ===================================================== 00:16:35.576 Controller Capabilities/Features 00:16:35.576 ================================ 00:16:35.576 Vendor ID: 0000 00:16:35.577 Subsystem Vendor ID: 0000 00:16:35.577 Serial Number: e19f4770ec0a66583b42 00:16:35.577 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:35.577 Firmware Version: 6.7.0-68 00:16:35.577 Recommended Arb Burst: 6 00:16:35.577 IEEE OUI Identifier: 00 00 00 00:16:35.577 Multi-path I/O 00:16:35.577 May have multiple subsystem ports: Yes 00:16:35.577 May have multiple controllers: Yes 00:16:35.577 Associated with SR-IOV VF: No 00:16:35.577 Max Data Transfer Size: Unlimited 00:16:35.577 Max Number of Namespaces: 1024 00:16:35.577 Max Number of I/O Queues: 128 00:16:35.577 NVMe Specification Version (VS): 1.3 00:16:35.577 NVMe Specification Version (Identify): 1.3 00:16:35.577 Maximum Queue Entries: 1024 00:16:35.577 Contiguous Queues Required: No 00:16:35.577 Arbitration Mechanisms Supported 00:16:35.577 Weighted Round Robin: Not Supported 00:16:35.577 Vendor Specific: Not Supported 00:16:35.577 Reset Timeout: 7500 ms 00:16:35.577 Doorbell Stride: 4 bytes 00:16:35.577 NVM Subsystem Reset: Not Supported 00:16:35.577 Command Sets Supported 00:16:35.577 NVM Command Set: Supported 00:16:35.577 Boot Partition: Not Supported 00:16:35.577 Memory 
Page Size Minimum: 4096 bytes 00:16:35.577 Memory Page Size Maximum: 4096 bytes 00:16:35.577 Persistent Memory Region: Not Supported 00:16:35.577 Optional Asynchronous Events Supported 00:16:35.577 Namespace Attribute Notices: Supported 00:16:35.577 Firmware Activation Notices: Not Supported 00:16:35.577 ANA Change Notices: Supported 00:16:35.577 PLE Aggregate Log Change Notices: Not Supported 00:16:35.577 LBA Status Info Alert Notices: Not Supported 00:16:35.577 EGE Aggregate Log Change Notices: Not Supported 00:16:35.577 Normal NVM Subsystem Shutdown event: Not Supported 00:16:35.577 Zone Descriptor Change Notices: Not Supported 00:16:35.577 Discovery Log Change Notices: Not Supported 00:16:35.577 Controller Attributes 00:16:35.577 128-bit Host Identifier: Supported 00:16:35.577 Non-Operational Permissive Mode: Not Supported 00:16:35.577 NVM Sets: Not Supported 00:16:35.577 Read Recovery Levels: Not Supported 00:16:35.577 Endurance Groups: Not Supported 00:16:35.577 Predictable Latency Mode: Not Supported 00:16:35.577 Traffic Based Keep ALive: Supported 00:16:35.577 Namespace Granularity: Not Supported 00:16:35.577 SQ Associations: Not Supported 00:16:35.577 UUID List: Not Supported 00:16:35.577 Multi-Domain Subsystem: Not Supported 00:16:35.577 Fixed Capacity Management: Not Supported 00:16:35.577 Variable Capacity Management: Not Supported 00:16:35.577 Delete Endurance Group: Not Supported 00:16:35.577 Delete NVM Set: Not Supported 00:16:35.577 Extended LBA Formats Supported: Not Supported 00:16:35.577 Flexible Data Placement Supported: Not Supported 00:16:35.577 00:16:35.577 Controller Memory Buffer Support 00:16:35.577 ================================ 00:16:35.577 Supported: No 00:16:35.577 00:16:35.577 Persistent Memory Region Support 00:16:35.577 ================================ 00:16:35.577 Supported: No 00:16:35.577 00:16:35.577 Admin Command Set Attributes 00:16:35.577 ============================ 00:16:35.577 Security Send/Receive: Not Supported 00:16:35.577 Format NVM: Not Supported 00:16:35.577 Firmware Activate/Download: Not Supported 00:16:35.577 Namespace Management: Not Supported 00:16:35.577 Device Self-Test: Not Supported 00:16:35.577 Directives: Not Supported 00:16:35.577 NVMe-MI: Not Supported 00:16:35.577 Virtualization Management: Not Supported 00:16:35.577 Doorbell Buffer Config: Not Supported 00:16:35.577 Get LBA Status Capability: Not Supported 00:16:35.577 Command & Feature Lockdown Capability: Not Supported 00:16:35.577 Abort Command Limit: 4 00:16:35.577 Async Event Request Limit: 4 00:16:35.577 Number of Firmware Slots: N/A 00:16:35.577 Firmware Slot 1 Read-Only: N/A 00:16:35.577 Firmware Activation Without Reset: N/A 00:16:35.577 Multiple Update Detection Support: N/A 00:16:35.577 Firmware Update Granularity: No Information Provided 00:16:35.577 Per-Namespace SMART Log: Yes 00:16:35.577 Asymmetric Namespace Access Log Page: Supported 00:16:35.577 ANA Transition Time : 10 sec 00:16:35.577 00:16:35.577 Asymmetric Namespace Access Capabilities 00:16:35.577 ANA Optimized State : Supported 00:16:35.577 ANA Non-Optimized State : Supported 00:16:35.577 ANA Inaccessible State : Supported 00:16:35.577 ANA Persistent Loss State : Supported 00:16:35.577 ANA Change State : Supported 00:16:35.577 ANAGRPID is not changed : No 00:16:35.577 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:35.577 00:16:35.577 ANA Group Identifier Maximum : 128 00:16:35.577 Number of ANA Group Identifiers : 128 00:16:35.577 Max Number of Allowed Namespaces : 1024 00:16:35.577 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:16:35.577 Command Effects Log Page: Supported 00:16:35.577 Get Log Page Extended Data: Supported 00:16:35.577 Telemetry Log Pages: Not Supported 00:16:35.577 Persistent Event Log Pages: Not Supported 00:16:35.577 Supported Log Pages Log Page: May Support 00:16:35.577 Commands Supported & Effects Log Page: Not Supported 00:16:35.577 Feature Identifiers & Effects Log Page:May Support 00:16:35.577 NVMe-MI Commands & Effects Log Page: May Support 00:16:35.577 Data Area 4 for Telemetry Log: Not Supported 00:16:35.577 Error Log Page Entries Supported: 128 00:16:35.577 Keep Alive: Supported 00:16:35.577 Keep Alive Granularity: 1000 ms 00:16:35.577 00:16:35.577 NVM Command Set Attributes 00:16:35.577 ========================== 00:16:35.577 Submission Queue Entry Size 00:16:35.577 Max: 64 00:16:35.577 Min: 64 00:16:35.577 Completion Queue Entry Size 00:16:35.577 Max: 16 00:16:35.577 Min: 16 00:16:35.577 Number of Namespaces: 1024 00:16:35.577 Compare Command: Not Supported 00:16:35.577 Write Uncorrectable Command: Not Supported 00:16:35.577 Dataset Management Command: Supported 00:16:35.577 Write Zeroes Command: Supported 00:16:35.577 Set Features Save Field: Not Supported 00:16:35.577 Reservations: Not Supported 00:16:35.577 Timestamp: Not Supported 00:16:35.577 Copy: Not Supported 00:16:35.577 Volatile Write Cache: Present 00:16:35.577 Atomic Write Unit (Normal): 1 00:16:35.577 Atomic Write Unit (PFail): 1 00:16:35.577 Atomic Compare & Write Unit: 1 00:16:35.577 Fused Compare & Write: Not Supported 00:16:35.577 Scatter-Gather List 00:16:35.577 SGL Command Set: Supported 00:16:35.577 SGL Keyed: Not Supported 00:16:35.577 SGL Bit Bucket Descriptor: Not Supported 00:16:35.577 SGL Metadata Pointer: Not Supported 00:16:35.577 Oversized SGL: Not Supported 00:16:35.577 SGL Metadata Address: Not Supported 00:16:35.577 SGL Offset: Supported 00:16:35.577 Transport SGL Data Block: Not Supported 00:16:35.577 Replay Protected Memory Block: Not Supported 00:16:35.577 00:16:35.577 Firmware Slot Information 00:16:35.577 ========================= 00:16:35.577 Active slot: 0 00:16:35.577 00:16:35.577 Asymmetric Namespace Access 00:16:35.577 =========================== 00:16:35.577 Change Count : 0 00:16:35.577 Number of ANA Group Descriptors : 1 00:16:35.577 ANA Group Descriptor : 0 00:16:35.577 ANA Group ID : 1 00:16:35.577 Number of NSID Values : 1 00:16:35.577 Change Count : 0 00:16:35.577 ANA State : 1 00:16:35.577 Namespace Identifier : 1 00:16:35.577 00:16:35.577 Commands Supported and Effects 00:16:35.577 ============================== 00:16:35.577 Admin Commands 00:16:35.577 -------------- 00:16:35.577 Get Log Page (02h): Supported 00:16:35.577 Identify (06h): Supported 00:16:35.577 Abort (08h): Supported 00:16:35.577 Set Features (09h): Supported 00:16:35.577 Get Features (0Ah): Supported 00:16:35.577 Asynchronous Event Request (0Ch): Supported 00:16:35.577 Keep Alive (18h): Supported 00:16:35.577 I/O Commands 00:16:35.577 ------------ 00:16:35.577 Flush (00h): Supported 00:16:35.577 Write (01h): Supported LBA-Change 00:16:35.577 Read (02h): Supported 00:16:35.577 Write Zeroes (08h): Supported LBA-Change 00:16:35.577 Dataset Management (09h): Supported 00:16:35.577 00:16:35.577 Error Log 00:16:35.577 ========= 00:16:35.577 Entry: 0 00:16:35.577 Error Count: 0x3 00:16:35.577 Submission Queue Id: 0x0 00:16:35.577 Command Id: 0x5 00:16:35.577 Phase Bit: 0 00:16:35.577 Status Code: 0x2 00:16:35.577 Status Code Type: 0x0 00:16:35.578 Do Not Retry: 1 00:16:35.578 Error 
Location: 0x28 00:16:35.578 LBA: 0x0 00:16:35.578 Namespace: 0x0 00:16:35.578 Vendor Log Page: 0x0 00:16:35.578 ----------- 00:16:35.578 Entry: 1 00:16:35.578 Error Count: 0x2 00:16:35.578 Submission Queue Id: 0x0 00:16:35.578 Command Id: 0x5 00:16:35.578 Phase Bit: 0 00:16:35.578 Status Code: 0x2 00:16:35.578 Status Code Type: 0x0 00:16:35.578 Do Not Retry: 1 00:16:35.578 Error Location: 0x28 00:16:35.578 LBA: 0x0 00:16:35.578 Namespace: 0x0 00:16:35.578 Vendor Log Page: 0x0 00:16:35.578 ----------- 00:16:35.578 Entry: 2 00:16:35.578 Error Count: 0x1 00:16:35.578 Submission Queue Id: 0x0 00:16:35.578 Command Id: 0x4 00:16:35.578 Phase Bit: 0 00:16:35.578 Status Code: 0x2 00:16:35.578 Status Code Type: 0x0 00:16:35.578 Do Not Retry: 1 00:16:35.578 Error Location: 0x28 00:16:35.578 LBA: 0x0 00:16:35.578 Namespace: 0x0 00:16:35.578 Vendor Log Page: 0x0 00:16:35.578 00:16:35.578 Number of Queues 00:16:35.578 ================ 00:16:35.578 Number of I/O Submission Queues: 128 00:16:35.578 Number of I/O Completion Queues: 128 00:16:35.578 00:16:35.578 ZNS Specific Controller Data 00:16:35.578 ============================ 00:16:35.578 Zone Append Size Limit: 0 00:16:35.578 00:16:35.578 00:16:35.578 Active Namespaces 00:16:35.578 ================= 00:16:35.578 get_feature(0x05) failed 00:16:35.578 Namespace ID:1 00:16:35.578 Command Set Identifier: NVM (00h) 00:16:35.578 Deallocate: Supported 00:16:35.578 Deallocated/Unwritten Error: Not Supported 00:16:35.578 Deallocated Read Value: Unknown 00:16:35.578 Deallocate in Write Zeroes: Not Supported 00:16:35.578 Deallocated Guard Field: 0xFFFF 00:16:35.578 Flush: Supported 00:16:35.578 Reservation: Not Supported 00:16:35.578 Namespace Sharing Capabilities: Multiple Controllers 00:16:35.578 Size (in LBAs): 1310720 (5GiB) 00:16:35.578 Capacity (in LBAs): 1310720 (5GiB) 00:16:35.578 Utilization (in LBAs): 1310720 (5GiB) 00:16:35.578 UUID: 06e55246-3359-4e40-8322-15aab0191e4e 00:16:35.578 Thin Provisioning: Not Supported 00:16:35.578 Per-NS Atomic Units: Yes 00:16:35.578 Atomic Boundary Size (Normal): 0 00:16:35.578 Atomic Boundary Size (PFail): 0 00:16:35.578 Atomic Boundary Offset: 0 00:16:35.578 NGUID/EUI64 Never Reused: No 00:16:35.578 ANA group ID: 1 00:16:35.578 Namespace Write Protected: No 00:16:35.578 Number of LBA Formats: 1 00:16:35.578 Current LBA Format: LBA Format #00 00:16:35.578 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:35.578 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:35.578 rmmod nvme_tcp 00:16:35.578 rmmod nvme_fabrics 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:16:35.578 07:42:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.578 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:35.836 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:35.836 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:35.836 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:16:35.836 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:35.836 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:35.836 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:35.836 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:35.836 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:16:35.836 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:16:35.836 07:42:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:36.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:36.402 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:36.661 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:36.661 ************************************ 00:16:36.661 END TEST nvmf_identify_kernel_target 00:16:36.661 ************************************ 00:16:36.661 00:16:36.661 real 0m2.807s 00:16:36.661 user 0m0.935s 00:16:36.661 sys 0m1.382s 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.661 ************************************ 00:16:36.661 START TEST nvmf_auth_host 00:16:36.661 ************************************ 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:36.661 * Looking for test storage... 00:16:36.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:36.661 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:36.919 07:42:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:36.919 Cannot find device "nvmf_tgt_br" 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:36.919 Cannot find device "nvmf_tgt_br2" 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:36.919 Cannot find device "nvmf_tgt_br" 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:36.919 Cannot find device "nvmf_tgt_br2" 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:36.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:36.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:36.919 07:42:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:36.919 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:37.177 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:37.177 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:37.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:37.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:16:37.178 00:16:37.178 --- 10.0.0.2 ping statistics --- 00:16:37.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.178 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:37.178 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:37.178 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:16:37.178 00:16:37.178 --- 10.0.0.3 ping statistics --- 00:16:37.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.178 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:37.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:37.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:37.178 00:16:37.178 --- 10.0.0.1 ping statistics --- 00:16:37.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.178 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=77992 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 77992 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 77992 ']' 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
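At this point nvmftestinit/nvmf_veth_init has finished wiring the virtual topology: an initiator-side veth on the host, two target-side veths inside the nvmf_tgt_ns_spdk namespace, and a bridge joining them (the earlier "Cannot find device" / "Cannot open network namespace" errors come from the cleanup pass that removes any leftover topology before building a fresh one). Condensed into a stand-alone sketch, with names and addresses taken from the trace and the second target interface folded into a comment, the setup is roughly:

# Sketch of the veth/bridge topology built above (run as root; nvmf_tgt_if2/nvmf_tgt_br2 on
# 10.0.0.3 follows the same pattern and is omitted for brevity).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator side stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target side moves into the netns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge                                 # bridge glues the two veth halves
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                              # host -> target, as in the log

The three pings in the trace (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are the sanity check that the bridge forwards in both directions before nvmf_tgt is started inside the namespace.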
00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:37.178 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.113 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:38.113 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:16:38.113 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:38.113 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:38.113 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.371 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.371 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:38.371 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:38.371 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.371 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.371 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:38.371 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:38.371 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:38.371 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:38.371 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eaefe87af03226ec89b6f1917e8cad6c 00:16:38.371 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:38.371 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fhc 00:16:38.371 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eaefe87af03226ec89b6f1917e8cad6c 0 00:16:38.371 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eaefe87af03226ec89b6f1917e8cad6c 0 00:16:38.371 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.371 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eaefe87af03226ec89b6f1917e8cad6c 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fhc 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fhc 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.fhc 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.372 07:42:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bd3c4ab6a0ad76cc2900f64af6e22e23958a81035fe22ab2e8cc4e842330bd36 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.rp1 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bd3c4ab6a0ad76cc2900f64af6e22e23958a81035fe22ab2e8cc4e842330bd36 3 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bd3c4ab6a0ad76cc2900f64af6e22e23958a81035fe22ab2e8cc4e842330bd36 3 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bd3c4ab6a0ad76cc2900f64af6e22e23958a81035fe22ab2e8cc4e842330bd36 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.rp1 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.rp1 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.rp1 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4e4d000185f371ec925e4aeec972511fd1d70575d72466e2 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.uWZ 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4e4d000185f371ec925e4aeec972511fd1d70575d72466e2 0 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4e4d000185f371ec925e4aeec972511fd1d70575d72466e2 0 
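The gen_dhchap_key calls in this stretch produce the DHHC-1 secrets (keys[] and ckeys[]) that the auth test later registers with rpc_cmd keyring_file_add_key and passes to bdev_nvme_attach_controller as --dhchap-key/--dhchap-ctrlr-key. Judging by the DHHC-1:<id>:<base64>: values that appear further down the log, the formatting step base64-encodes the random hex string together with a short checksum and prefixes it with a two-digit hash id from the digests map above (00 null, 01 sha256, 02 sha384, 03 sha512). A minimal stand-alone sketch follows; the little-endian CRC32 suffix is inferred from the key layout in this log rather than quoted from the helper in test/nvmf/common.sh:

# Sketch: build a 32-hex-character DH-HMAC-CHAP secret with hash id 0 (no hash), the same shape
# as keys[0] above. The 4-byte CRC32 appended before base64 is an assumption based on the keys
# visible in this log, not a copy of the real format_dhchap_key helper.
key=$(xxd -p -c0 -l 16 /dev/urandom)        # 16 random bytes -> 32 hex characters, as in the trace
python3 - "$key" 0 <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")                      # checksum suffix (assumed)
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF

Each generated secret is written to a /tmp/spdk.key-* file and chmod'ed 0600, and those files are what keyring_file_add_key later exposes to the initiator as key0/ckey0 and so on.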
00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4e4d000185f371ec925e4aeec972511fd1d70575d72466e2 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.uWZ 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.uWZ 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.uWZ 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5cbaea130f36130bc010df15f98c71e40add735a8f5ddb21 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Bxs 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5cbaea130f36130bc010df15f98c71e40add735a8f5ddb21 2 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5cbaea130f36130bc010df15f98c71e40add735a8f5ddb21 2 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5cbaea130f36130bc010df15f98c71e40add735a8f5ddb21 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:38.372 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:38.631 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Bxs 00:16:38.631 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Bxs 00:16:38.631 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Bxs 00:16:38.632 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.632 07:42:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0ec0a930e7e7ecf3480a6076ccd1d239 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.1qP 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0ec0a930e7e7ecf3480a6076ccd1d239 1 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0ec0a930e7e7ecf3480a6076ccd1d239 1 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0ec0a930e7e7ecf3480a6076ccd1d239 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.1qP 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.1qP 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.1qP 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=06ea6047bc630bf3029f45890222ccc9 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.qCL 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 06ea6047bc630bf3029f45890222ccc9 1 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 06ea6047bc630bf3029f45890222ccc9 1 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=06ea6047bc630bf3029f45890222ccc9 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.qCL 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.qCL 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.qCL 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=35c3d7762380cab8c2a746ebe6aa88e8ed61d256b4ef46cd 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.BQY 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 35c3d7762380cab8c2a746ebe6aa88e8ed61d256b4ef46cd 2 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 35c3d7762380cab8c2a746ebe6aa88e8ed61d256b4ef46cd 2 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=35c3d7762380cab8c2a746ebe6aa88e8ed61d256b4ef46cd 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.BQY 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.BQY 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.BQY 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:38.632 07:42:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=87504b858653dc1c90b9fd20c9379185 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5kL 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 87504b858653dc1c90b9fd20c9379185 0 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 87504b858653dc1c90b9fd20c9379185 0 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=87504b858653dc1c90b9fd20c9379185 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:38.632 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5kL 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5kL 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.5kL 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dc82cf6e94ecd656b993d53ff98909c4f85ba39a644dd73940fe5b05024108e7 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.FKp 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dc82cf6e94ecd656b993d53ff98909c4f85ba39a644dd73940fe5b05024108e7 3 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dc82cf6e94ecd656b993d53ff98909c4f85ba39a644dd73940fe5b05024108e7 3 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dc82cf6e94ecd656b993d53ff98909c4f85ba39a644dd73940fe5b05024108e7 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.FKp 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.FKp 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.FKp 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 77992 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 77992 ']' 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:38.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:38.891 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.150 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:39.150 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:16:39.150 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:39.150 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fhc 00:16:39.150 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.150 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.150 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.150 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.rp1 ]] 00:16:39.150 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rp1 00:16:39.150 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.150 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.uWZ 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Bxs ]] 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Bxs 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.1qP 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.qCL ]] 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qCL 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.BQY 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.5kL ]] 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.5kL 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.FKp 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:39.151 07:42:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:39.151 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:39.718 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:39.718 Waiting for block devices as requested 00:16:39.718 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:39.718 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:40.284 No valid GPT data, bailing 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:40.284 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:40.543 No valid GPT data, bailing 00:16:40.543 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:40.543 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:40.543 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@392 -- # return 1 00:16:40.543 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:40.543 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:40.543 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:40.543 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:40.543 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:40.543 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:40.543 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:40.543 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:40.543 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:40.543 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:40.543 No valid GPT data, bailing 00:16:40.543 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:40.543 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:40.543 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:40.543 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:40.543 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:40.543 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:40.543 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:40.543 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:40.543 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:40.543 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:40.543 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:40.543 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:40.543 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:40.543 No valid GPT data, bailing 00:16:40.543 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:40.544 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:40.544 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:40.544 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:40.544 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:40.544 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:40.544 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:40.544 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:40.544 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:40.544 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:16:40.544 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:40.544 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:16:40.544 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:40.544 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:16:40.544 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:16:40.544 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:16:40.544 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:40.544 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a --hostid=437e2608-a818-4ddb-8068-388d756b599a -a 10.0.0.1 -t tcp -s 4420 00:16:40.802 00:16:40.802 Discovery Log Number of Records 2, Generation counter 2 00:16:40.802 =====Discovery Log Entry 0====== 00:16:40.802 trtype: tcp 00:16:40.802 adrfam: ipv4 00:16:40.802 subtype: current discovery subsystem 00:16:40.802 treq: not specified, sq flow control disable supported 00:16:40.802 portid: 1 00:16:40.802 trsvcid: 4420 00:16:40.802 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:40.802 traddr: 10.0.0.1 00:16:40.802 eflags: none 00:16:40.802 sectype: none 00:16:40.802 =====Discovery Log Entry 1====== 00:16:40.802 trtype: tcp 00:16:40.802 adrfam: ipv4 00:16:40.802 subtype: nvme subsystem 00:16:40.802 treq: not specified, sq flow control disable supported 00:16:40.802 portid: 1 00:16:40.802 trsvcid: 4420 00:16:40.802 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:40.802 traddr: 10.0.0.1 00:16:40.802 eflags: none 00:16:40.802 sectype: none 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: ]] 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
10.0.0.1 ]] 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.802 nvme0n1 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:40.802 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:40.803 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.803 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.080 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.080 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.080 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.080 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.080 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.080 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.080 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:41.080 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.080 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.080 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:41.080 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.080 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.080 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:41.080 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:41.080 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: ]] 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.081 nvme0n1 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.081 
07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: ]] 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:41.081 07:42:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.081 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.340 nvme0n1 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:41.340 07:42:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: ]] 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.340 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.599 nvme0n1 00:16:41.599 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.599 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.599 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.599 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.599 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.599 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.599 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.599 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.599 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.599 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: ]] 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.599 07:42:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.599 nvme0n1 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:41.599 
07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:41.599 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:16:41.600 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.600 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:41.600 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:41.600 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:41.600 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.600 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.600 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.600 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
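The xtrace above repeats one pattern per digest/dhgroup/key combination: program the DH-HMAC-CHAP secret for the host NQN on the kernel nvmet target, restrict the SPDK initiator to the combination under test, attach, confirm the controller shows up, then detach. A minimal sketch of a single iteration follows, assuming scripts/rpc.py from the SPDK checkout at /home/vagrant/spdk_repo/spdk and that the DHHC-1 secrets shown above were already registered under the key names key1/ckey1 (that registration step and the nvmet configfs writes are not fully visible in this excerpt):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Allow only the digest/dhgroup pair being exercised in this pass.
  "$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # Attach over TCP with bidirectional DH-HMAC-CHAP (key1 = host secret, ckey1 = controller secret).
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # The controller is only listed if authentication succeeded.
  "$rpc" bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  # Tear down before the next digest/dhgroup/keyid combination.
  "$rpc" bdev_nvme_detach_controller nvme0

In the log that follows, this loop has so far covered key ids 0 through 4 for sha256 with ffdhe2048 and ffdhe3072, and is starting on ffdhe4096.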
00:16:41.858 nvme0n1 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.858 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: ]] 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:42.116 07:42:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.116 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.376 nvme0n1 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.376 07:42:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: ]] 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.376 07:42:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.376 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.635 nvme0n1 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: ]] 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.635 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.636 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:42.636 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.636 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.636 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.636 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.636 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.636 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.636 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.636 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.636 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.636 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.636 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.636 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.894 nvme0n1 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: ]] 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.894 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.895 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.895 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.895 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.895 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.895 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.895 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.895 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:42.895 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.895 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.895 nvme0n1 00:16:42.895 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.895 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.895 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.895 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.895 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.895 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.153 nvme0n1 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.153 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:43.154 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.154 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.154 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.154 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.154 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.154 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.154 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.154 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.154 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.154 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:16:43.154 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.154 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:43.154 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:43.154 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:43.154 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:43.154 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:43.154 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:43.154 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: ]] 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.721 07:42:09 
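
The block above closes one (digest, dhgroup, keyid) pass and immediately opens the next: host/auth.sh@101-103 iterate over "${dhgroups[@]}" and "${!keys[@]}", program the target with nvmet_auth_set_key, then exercise the host path with connect_authenticate. A minimal sketch of that driver loop, reconstructed from the xtrace (the array names keys, ckeys and dhgroups are taken from the trace; everything else is an approximation, not the verbatim auth.sh source):

    # Reconstructed outer loop behind the repeated auth.sh@101-104 entries above.
    # keys[keyid] / ckeys[keyid] hold DHHC-1 secrets; dhgroups lists the FFDHE groups.
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192, ...
        for keyid in "${!keys[@]}"; do       # 0..4 in this run
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # target-side key (auth.sh@103)
            connect_authenticate sha256 "$dhgroup" "$keyid"  # host-side connect (auth.sh@104)
        done
    done
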
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.721 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.979 nvme0n1 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.979 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:43.980 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:43.980 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:43.980 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:43.980 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:43.980 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:43.980 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:43.980 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:43.980 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: ]] 00:16:43.980 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:43.980 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:16:43.980 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.980 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:43.980 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:43.980 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:43.980 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.980 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.980 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.980 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.238 07:42:09 
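
Every connect in this log first resolves the address through get_main_ns_ip (nvmf/common.sh@741-755): an associative array maps the transport to the name of the environment variable holding the address, NVMF_FIRST_TARGET_IP for rdma and NVMF_INITIATOR_IP for tcp, and the tcp entry dereferences to 10.0.0.1 here. A rough reconstruction from the trace; the TEST_TRANSPORT variable name and the exact error handling are assumptions, not the verbatim nvmf/common.sh code:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # variable *names*, not values
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # common.sh@747: bail out if the transport is unset or unknown
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # common.sh@748: pick the variable name
        ip=${!ip}                              # dereference it, e.g. 10.0.0.1 for tcp
        [[ -z $ip ]] && return 1               # common.sh@750
        echo "$ip"                             # common.sh@755
    }
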
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.238 nvme0n1 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.238 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: ]] 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.496 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.497 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.497 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.497 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:44.497 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:44.497 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:44.497 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.497 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.497 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:44.497 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.497 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:44.497 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:44.497 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:44.497 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.497 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.497 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.497 nvme0n1 00:16:44.497 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.497 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.497 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.497 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.497 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.497 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.497 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.497 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.497 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.497 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.755 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.755 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.755 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:16:44.755 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.755 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:44.755 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:44.755 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:44.755 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: ]] 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.756 nvme0n1 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.756 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:45.015 07:42:10 
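
Each connect_authenticate pass in this trace issues the same host-side RPC sequence: restrict the allowed DH-HMAC-CHAP digests and dhgroups, attach with the key under test (plus the controller key whenever a ckey exists for that keyid), confirm the controller shows up as nvme0, then detach. Condensed from the logged commands; the surrounding function plumbing is approximated, while the rpc_cmd invocations are as they appear above:

    # One sha256/ffdhe4096 iteration, keyid=3, as exercised above.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    ip=$(get_main_ns_ip)     # 10.0.0.1 in this run
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    # verify that the authenticated controller exists, then tear it down
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
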
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.015 nvme0n1 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.015 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.274 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.274 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.274 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.274 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:45.274 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.274 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:45.274 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:45.274 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:45.274 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:45.274 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:45.274 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:45.274 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: ]] 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.203 nvme0n1 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: ]] 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.203 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.462 nvme0n1 00:16:47.462 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.462 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.462 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.462 07:42:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.462 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: ]] 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.720 07:42:13 
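
The secrets echoed throughout (DHHC-1:00:, :01:, :02:, :03:) are DH-HMAC-CHAP keys in the standard DHHC-1:<t>:<base64>: representation: the middle field identifies the hash transformation associated with the secret (00 meaning the secret is used as is), and the base64 payload carries the secret followed by a 4-byte CRC. A quick sanity check on one of the keys logged above (the secret/CRC split is the standard key layout, not something this log itself states):

    # Decode a logged :01: key and report the embedded secret length.
    key='DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6:'
    bytes=$(printf '%s' "$key" | cut -d: -f3 | base64 -d | wc -c)
    echo "secret bytes: $((bytes - 4))"   # prints 32: a 32-byte secret plus 4-byte CRC
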
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.720 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:47.721 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:47.721 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:47.721 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.721 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.721 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:47.721 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.721 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.721 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.721 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.721 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.721 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.721 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.977 nvme0n1 00:16:47.977 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.977 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.977 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.977 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.977 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.977 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.977 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.977 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.977 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.977 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: ]] 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:47.978 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.978 
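
On the target side, each nvmet_auth_set_key call (auth.sh@42-51 above) ends in the echoes visible in the trace: the digest wrapped as 'hmac(sha256)', the dhgroup name, the DHHC-1 key, and, only when a ckey exists for that keyid, the controller key. Those values presumably feed the kernel nvmet host entry for the host NQN used in the attach calls (nqn.2024-02.io.spdk:host0). A hedged sketch of what that amounts to; the configfs path and the $hostnqn/$key/$ckey variables are assumptions based on the usual nvmet layout, not paths shown in this log:

    # Approximate target-side effect of nvmet_auth_set_key sha256 ffdhe6144 3
    host_cfs=/sys/kernel/config/nvmet/hosts/$hostnqn    # assumed configfs location
    echo 'hmac(sha256)' > "$host_cfs/dhchap_hash"       # auth.sh@48 in the trace
    echo ffdhe6144      > "$host_cfs/dhchap_dhgroup"    # auth.sh@49
    echo "$key"         > "$host_cfs/dhchap_key"        # auth.sh@50, the DHHC-1 secret
    # auth.sh@51 guards the controller key: only written when ckey is non-empty
    [[ -n $ckey ]] && echo "$ckey" > "$host_cfs/dhchap_ctrl_key"
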
07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.542 nvme0n1 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.542 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.543 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.543 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.543 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:48.543 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:48.543 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:48.543 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.543 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.543 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:48.543 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.543 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:48.543 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:48.543 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:48.543 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:48.543 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.543 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.802 nvme0n1 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.802 07:42:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: ]] 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.802 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.431 nvme0n1 00:16:49.431 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.431 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.431 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.431 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.431 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.431 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.431 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.431 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.431 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.431 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.431 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.431 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.431 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:49.431 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.431 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:49.431 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: ]] 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
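
The block above completes one full pass for sha256/ffdhe8192 with key index 0: the key is installed on the target through nvmet_auth_set_key, the host is limited to that digest/DH-group pair via bdev_nvme_set_options, the controller is attached with the matching named keys, verified through bdev_nvme_get_controllers, and detached again. A minimal sketch of that per-key cycle, assuming rpc_cmd is the harness's JSON-RPC wrapper and nvmet_auth_set_key the auth.sh helper seen in the trace (placeholder values, not the keys under test):

    # one pass of the auth sweep: target key in, host constrained, attach, verify, detach
    digest=sha256 dhgroup=ffdhe8192 keyid=0

    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"              # target side
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
        --dhchap-dhgroups "$dhgroup"                              # host-side policy

    # attach with DH-HMAC-CHAP; the ctrlr key makes the authentication mutual
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # the new controller must show up as nvme0 before it is torn down again
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
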
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.432 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.997 nvme0n1 00:16:49.997 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.997 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.997 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.997 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.997 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.997 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.257 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.257 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.257 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:50.257 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.257 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.257 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.257 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:50.257 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.257 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.257 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:50.257 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:50.257 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: ]] 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.258 
07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.258 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.824 nvme0n1 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: ]] 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.824 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.392 nvme0n1 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.392 07:42:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.392 07:42:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.392 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.973 nvme0n1 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
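
Here the sweep moves from sha256/ffdhe8192 to sha384/ffdhe2048: host/auth.sh@100-104 iterate every digest, every DH group, and every key index in turn, running the same set-key/connect/verify/detach cycle for each combination. Roughly, with the digest and dhgroup lists limited to the subset visible in this excerpt rather than the full arrays the script defines:

    # host/auth.sh@100-104: exhaustive sweep over digest x dhgroup x key index
    digests=(sha256 sha384)                     # subset seen in this excerpt
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe8192)    # subset seen in this excerpt
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do      # keys[0..4] hold the DHHC-1 secrets
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
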
ckey=DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:51.973 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: ]] 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.974 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.232 nvme0n1 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: ]] 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.232 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.490 nvme0n1 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:52.490 
07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: ]] 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.490 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.490 nvme0n1 00:16:52.490 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.490 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.491 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.491 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.491 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.491 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.491 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.491 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.491 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.491 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: ]] 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.749 
07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.749 nvme0n1 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.749 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.750 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:52.750 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.750 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.008 nvme0n1 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: ]] 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
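
Key index 4 is the unidirectional case: its controller key (ckeys[4]) is empty, so the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 contributes no --dhchap-ctrlr-key argument and the attach above authenticates the host only. A sketch of that argument handling, using the array names as they appear in the trace:

    # ckeys[4] is empty, so the :+ expansion yields an empty array and the attach
    # below carries no --dhchap-ctrlr-key, i.e. no mutual authentication for key 4.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
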
host/auth.sh@51 -- # echo DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.008 nvme0n1 00:16:53.008 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.009 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.009 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.009 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.009 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.267 
07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: ]] 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.267 07:42:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.267 nvme0n1 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.267 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:53.526 07:42:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: ]] 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.526 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.526 nvme0n1 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: ]] 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.526 07:42:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.526 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.527 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.527 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.527 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.527 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.527 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.527 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.527 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.527 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:53.527 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.527 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.786 nvme0n1 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:53.786 
07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.786 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
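Every connect in this trace is preceded by get_main_ns_ip (nvmf/common.sh@741-755), which maps the transport to the name of an environment variable and then dereferences it to get the address passed to -a. A condensed reconstruction of that selection logic, with TEST_TRANSPORT and NVMF_INITIATOR_IP values assumed here for illustration:

# Reconstruction of the IP-selection steps traced at nvmf/common.sh@741-755.
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # pick the variable *name* for this transport, then dereference it
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}

get_main_ns_ip   # prints 10.0.0.1 with the values assumed above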
00:16:54.045 nvme0n1 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: ]] 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:54.045 07:42:19 
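The host/auth.sh@101 and @102 entries just above mark the outer loop advancing from ffdhe3072 to ffdhe4096 while the inner loop restarts at keyid 0. A compact sketch of that iteration, assuming the script's own nvmet_auth_set_key and connect_authenticate helpers are in scope and showing only the digest and dhgroups exercised in this part of the log:

# Loop structure visible at host/auth.sh@101-104 in this trace.
digest=sha384
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)
keys=(key0 key1 key2 key3 key4)          # placeholders for the DHHC-1 secrets

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # configure the kernel target (@103)
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, verify, detach via SPDK RPCs (@104)
    done
done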
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.045 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:54.046 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:54.046 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:54.046 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.046 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.046 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.304 nvme0n1 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.304 07:42:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: ]] 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.304 07:42:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.304 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.563 nvme0n1 00:16:54.563 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.563 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.563 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.563 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.563 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.563 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: ]] 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.563 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.822 nvme0n1 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
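The connect_authenticate body traced at host/auth.sh@55-61 reduces to two RPCs per iteration: restrict the initiator to the digest/dhgroup under test, then attach with the matching DH-HMAC-CHAP keys. A condensed sketch of the keyid 2 / ffdhe4096 iteration shown here, using the same rpc_cmd wrapper and arguments as the trace (key2/ckey2 are the keyring names the earlier part of the test registered):

# One connect_authenticate iteration as traced at host/auth.sh@55-61
# (sha384 / ffdhe4096 / keyid 2). rpc_cmd is the test suite's rpc.py wrapper.
digest=sha384 dhgroup=ffdhe4096 keyid=2

# restrict the initiator to the digest/dhgroup under test
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# connect to the kernel nvmet target with the matching DH-HMAC-CHAP keys
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"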
bdev_nvme_detach_controller nvme0 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: ]] 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.822 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.081 nvme0n1 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.081 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.340 nvme0n1 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: ]] 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.340 07:42:20 
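After each successful attach, the trace confirms the controller actually exists before tearing it down (host/auth.sh@64-65), which is where the repeated "nvme0n1" lines and the [[ nvme0 == nvme0 ]] checks come from. A short sketch of that verify-and-detach step, again assuming the suite's rpc_cmd wrapper:

# Verify-and-detach step as traced at host/auth.sh@64-65.
ctrlr_name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $ctrlr_name == "nvme0" ]]          # authentication succeeded and nvme0 is present
rpc_cmd bdev_nvme_detach_controller nvme0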
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.340 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.907 nvme0n1 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: ]] 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.907 07:42:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.907 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.166 nvme0n1 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: ]] 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.166 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.425 nvme0n1 00:16:56.425 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.683 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.683 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.683 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.683 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.683 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.683 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.683 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.683 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.683 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.683 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.683 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.683 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:16:56.683 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: ]] 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.684 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.943 nvme0n1 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:56.943 07:42:22 
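The repeated nvmf/common.sh@741-755 lines are get_main_ns_ip resolving which address the initiator should dial: the transport is mapped to the name of an environment variable, which is then expanded indirectly, yielding 10.0.0.1 for the TCP runs in this log. A condensed sketch of that helper (the logic is taken from the trace; variable names such as TEST_TRANSPORT are assumptions):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this log) dial the initiator IP
        ip=${ip_candidates[$TEST_TRANSPORT]}         # pick the variable name for this transport ...
        echo "${!ip}"                                # ... and expand it indirectly; 10.0.0.1 here
    }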
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.943 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.510 nvme0n1 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: ]] 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:57.510 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.511 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.511 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.076 nvme0n1 00:16:58.076 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: ]] 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.077 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.643 nvme0n1 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.643 07:42:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: ]] 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.643 07:42:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.643 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.209 nvme0n1 00:16:59.209 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.209 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.209 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.209 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.209 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: ]] 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:59.468 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.468 
07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.035 nvme0n1 00:17:00.035 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.035 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.035 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.035 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.035 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.035 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.035 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.035 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.035 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.035 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.035 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.035 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.035 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:00.035 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.035 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.035 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.036 07:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.603 nvme0n1 00:17:00.603 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.603 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.603 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.603 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.603 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.603 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.603 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.603 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.603 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.603 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.603 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.603 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:00.604 07:42:26 
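The "for digest" / "for dhgroup" / "for keyid" lines just above come from the driving loop of host/auth.sh, which walks every digest, DH group, and key id combination in turn; the digest has just advanced from sha384 to sha512. In simplified form (a sketch of the shape visible in the trace, not the verbatim script):

    for digest in "${digests[@]}"; do           # sha384 and sha512 appear in this part of the run
        for dhgroup in "${dhgroups[@]}"; do     # e.g. ffdhe2048, ffdhe6144, ffdhe8192
            for keyid in "${!keys[@]}"; do      # key ids 0 through 4
                # program the secret for this key id on the kernel nvmet target
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                # attach, verify and detach an authenticated controller using that key
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done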
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: ]] 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.604 07:42:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.604 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.863 nvme0n1 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: ]] 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:17:00.863 07:42:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.863 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.122 nvme0n1 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: ]] 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.122 nvme0n1 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: ]] 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:01.122 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.381 nvme0n1 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.381 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.382 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.382 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:01.382 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.382 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.640 nvme0n1 00:17:01.640 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.640 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.640 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.640 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.640 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.640 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: ]] 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:01.641 nvme0n1 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.641 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.899 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.899 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.899 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.899 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.899 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: ]] 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.900 nvme0n1 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.900 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:02.159 
07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: ]] 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.159 nvme0n1 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: ]] 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.159 
07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.159 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.420 nvme0n1 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.420 07:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.679 nvme0n1 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: ]] 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:02.679 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.680 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.938 nvme0n1 00:17:02.938 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.938 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.938 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.938 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.938 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.938 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.938 
07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: ]] 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.939 07:42:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.939 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.198 nvme0n1 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:17:03.198 07:42:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: ]] 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.198 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.457 nvme0n1 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: ]] 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.457 07:42:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.457 07:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.716 nvme0n1 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:03.716 
07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.716 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
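The xtrace above repeats the same host-side pattern for every digest/dhgroup/keyid combination. A minimal standalone sketch of that flow, assuming SPDK's rpc.py is on PATH (the trace goes through the rpc_cmd wrapper instead) and that the key names key3/ckey3 were registered earlier in this run:

    # Restrict the initiator to one digest and one DH group per iteration,
    # mirroring the bdev_nvme_set_options call in the trace.
    rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Attach with the host key, plus the controller key when bidirectional auth is tested.
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # Verify the controller came up; the script's [[ nvme0 == \n\v\m\e\0 ]] is the same
    # literal comparison with globbing characters escaped.
    name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]]

    # Tear down before the next digest/dhgroup/keyid combination.
    rpc.py bdev_nvme_detach_controller nvme0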
00:17:03.975 nvme0n1 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: ]] 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:03.975 07:42:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.975 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.234 nvme0n1 00:17:04.234 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.234 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.234 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.234 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.234 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.492 07:42:29 
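On the target side, nvmet_auth_set_key (host/auth.sh@42-51) pushes the chosen hash, DH group and DHHC-1 secrets into the kernel nvmet host entry. xtrace does not show redirection targets, so the paths below are an educated guess based on the standard nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), not a quote from auth.sh:

    # Assumed layout: one configfs directory per allowed hostnqn.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host/dhchap_hash"       # host/auth.sh@48
    echo ffdhe6144      > "$host/dhchap_dhgroup"    # host/auth.sh@49
    echo "$key"         > "$host/dhchap_key"        # DHHC-1 host secret (host/auth.sh@50)
    # Controller secret only when a ckey exists, i.e. bidirectional authentication.
    [[ -n "$ckey" ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # host/auth.sh@51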
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: ]] 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.492 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.493 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.493 07:42:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.493 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.493 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.493 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.493 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.493 07:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.751 nvme0n1 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: ]] 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
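get_main_ns_ip (nvmf/common.sh@741-755) is a transport-to-environment-variable lookup; for tcp it resolves to NVMF_INITIATOR_IP, which is 10.0.0.1 throughout this run. A compact sketch of the same selection logic; the TEST_TRANSPORT variable name and the ${!ip} indirection are assumptions about details not visible in the xtrace:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # nvmf/common.sh@744
            [tcp]=NVMF_INITIATOR_IP       # nvmf/common.sh@745
        )
        [[ -z "$TEST_TRANSPORT" ]] && return 1        # first @747 guard: transport must be set
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z "$ip" ]] && return 1                    # second @747 guard: transport has a candidate
        [[ -z "${!ip}" ]] && return 1                 # @750 guard: the named variable holds an address
        echo "${!ip}"                                 # @755: 10.0.0.1 for tcp in this run
    }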
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:04.751 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.752 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:04.752 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.752 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.752 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.752 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.752 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.752 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.752 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.752 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.752 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.752 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.752 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.752 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.752 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.752 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.752 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.752 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.752 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.319 nvme0n1 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: ]] 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.319 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.576 nvme0n1 00:17:05.576 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.577 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.143 nvme0n1 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZmU4N2FmMDMyMjZlYzg5YjZmMTkxN2U4Y2FkNmNmx7WK: 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: ]] 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQzYzRhYjZhMGFkNzZjYzI5MDBmNjRhZjZlMjJlMjM5NThhODEwMzVmZTIyYWIyZThjYzRlODQyMzMwYmQzNlYleao=: 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:06.143 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.144 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.144 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.144 07:42:31 
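All of the secrets echoed in this trace use the in-band DHHC-1 representation. As background (nothing here is checked by the test itself), the format is believed to be DHHC-1:<t>:<base64 of key material plus CRC>:, where the middle field selects the optional secret-transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), which also tracks the key length seen above. A purely illustrative split of such a string, using a made-up placeholder secret:

    # Hypothetical placeholder, not a secret from this run.
    secret='DHHC-1:01:AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==:'
    IFS=: read -r version hmac_id material _ <<< "$secret"
    # hmac_id mapping (00/01/02/03) per the assumed format description above.
    echo "version=$version hmac_id=$hmac_id material_len=${#material}"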
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.144 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:06.144 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:06.144 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:06.144 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.144 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.144 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:06.144 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.144 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:06.144 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:06.144 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:06.144 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.144 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.144 07:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.710 nvme0n1 00:17:06.710 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.710 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.710 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.710 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.710 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.710 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.710 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.710 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.710 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.710 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: ]] 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.711 07:42:32 
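connect_authenticate builds its optional controller-key argument with bash's ${var:+...} expansion (host/auth.sh@58): when ckeys[keyid] is empty, as it is for keyid 4 in this run, the whole --dhchap-ctrlr-key option pair disappears instead of being passed with an empty value. A minimal reproduction of that pattern (the placeholder secret is illustrative only):

    # keyid 1 has a controller secret, keyid 4 does not, matching this run.
    ckeys=([1]='DHHC-1:02:placeholder==:' [4]='')
    for keyid in 1 4; do
        # Unquoted :+ expansion yields either two words or nothing at all.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
    done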
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.711 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.278 nvme0n1 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVjMGE5MzBlN2U3ZWNmMzQ4MGE2MDc2Y2NkMWQyMzmhHZj6: 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: ]] 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDZlYTYwNDdiYzYzMGJmMzAyOWY0NTg5MDIyMmNjYzmvxGfU: 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.278 07:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.845 nvme0n1 00:17:07.845 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.845 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.845 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.845 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.845 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.845 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.845 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.845 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.845 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.845 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVjM2Q3NzYyMzgwY2FiOGMyYTc0NmViZTZhYTg4ZThlZDYxZDI1NmI0ZWY0NmNkLMw9tw==: 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: ]] 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc1MDRiODU4NjUzZGMxYzkwYjlmZDIwYzkzNzkxODWNM1Yo: 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.113 07:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.679 nvme0n1 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM4MmNmNmU5NGVjZDY1NmI5OTNkNTNmZjk4OTA5YzRmODViYTM5YTY0NGRkNzM5NDBmZTViMDUwMjQxMDhlNzucvUQ=: 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.679 07:42:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.679 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.270 nvme0n1 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU0ZDAwMDE4NWYzNzFlYzkyNWU0YWVlYzk3MjUxMWZkMWQ3MDU3NWQ3MjQ2NmUyzheJlw==: 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: ]] 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWNiYWVhMTMwZjM2MTMwYmMwMTBkZjE1Zjk4YzcxZTQwYWRkNzM1YThmNWRkYjIxHhF9rw==: 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.270 request: 00:17:09.270 { 00:17:09.270 "name": "nvme0", 00:17:09.270 "trtype": "tcp", 00:17:09.270 "traddr": "10.0.0.1", 00:17:09.270 "adrfam": "ipv4", 00:17:09.270 "trsvcid": "4420", 00:17:09.270 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:09.270 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:09.270 "prchk_reftag": false, 00:17:09.270 "prchk_guard": false, 00:17:09.270 "hdgst": false, 00:17:09.270 "ddgst": false, 00:17:09.270 "method": "bdev_nvme_attach_controller", 00:17:09.270 "req_id": 1 00:17:09.270 } 00:17:09.270 Got JSON-RPC error response 00:17:09.270 response: 00:17:09.270 { 00:17:09.270 "code": -5, 00:17:09.270 "message": "Input/output error" 00:17:09.270 } 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.270 07:42:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.270 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:09.271 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.271 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:09.271 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.271 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.271 request: 00:17:09.271 { 00:17:09.271 "name": "nvme0", 00:17:09.271 "trtype": "tcp", 00:17:09.271 "traddr": "10.0.0.1", 00:17:09.271 "adrfam": "ipv4", 00:17:09.271 "trsvcid": "4420", 00:17:09.271 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:09.271 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:09.271 "prchk_reftag": false, 00:17:09.271 "prchk_guard": false, 00:17:09.271 "hdgst": false, 00:17:09.271 "ddgst": false, 00:17:09.271 "dhchap_key": "key2", 00:17:09.271 "method": "bdev_nvme_attach_controller", 00:17:09.271 "req_id": 1 00:17:09.271 } 00:17:09.271 Got JSON-RPC error response 00:17:09.271 response: 00:17:09.271 { 00:17:09.271 "code": -5, 00:17:09.271 "message": "Input/output error" 00:17:09.271 } 00:17:09.271 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:09.271 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:09.271 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:09.271 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:09.271 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:09.271 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.271 07:42:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.271 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:09.271 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.530 request: 00:17:09.530 { 00:17:09.530 "name": "nvme0", 00:17:09.530 "trtype": "tcp", 00:17:09.530 "traddr": "10.0.0.1", 00:17:09.530 "adrfam": "ipv4", 00:17:09.530 "trsvcid": "4420", 00:17:09.530 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:09.530 "hostnqn": "nqn.2024-02.io.spdk:host0", 
00:17:09.530 "prchk_reftag": false, 00:17:09.530 "prchk_guard": false, 00:17:09.530 "hdgst": false, 00:17:09.530 "ddgst": false, 00:17:09.530 "dhchap_key": "key1", 00:17:09.530 "dhchap_ctrlr_key": "ckey2", 00:17:09.530 "method": "bdev_nvme_attach_controller", 00:17:09.530 "req_id": 1 00:17:09.530 } 00:17:09.530 Got JSON-RPC error response 00:17:09.530 response: 00:17:09.530 { 00:17:09.530 "code": -5, 00:17:09.530 "message": "Input/output error" 00:17:09.530 } 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:09.530 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:09.530 rmmod nvme_tcp 00:17:09.530 rmmod nvme_fabrics 00:17:09.530 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:09.530 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:17:09.530 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:17:09.530 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 77992 ']' 00:17:09.530 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 77992 00:17:09.530 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 77992 ']' 00:17:09.530 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 77992 00:17:09.530 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:17:09.530 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:09.530 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77992 00:17:09.530 killing process with pid 77992 00:17:09.530 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:09.530 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:09.530 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77992' 00:17:09.530 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 77992 00:17:09.530 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@974 -- # wait 77992 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:09.789 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:10.048 07:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:10.614 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:10.614 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:10.614 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:10.873 07:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.fhc /tmp/spdk.key-null.uWZ /tmp/spdk.key-sha256.1qP /tmp/spdk.key-sha384.BQY /tmp/spdk.key-sha512.FKp /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:10.873 07:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:11.131 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:11.131 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:11.131 0000:00:10.0 
(1b36 0010): Already using the uio_pci_generic driver 00:17:11.131 00:17:11.131 real 0m34.510s 00:17:11.131 user 0m31.698s 00:17:11.131 sys 0m3.684s 00:17:11.131 ************************************ 00:17:11.131 END TEST nvmf_auth_host 00:17:11.131 ************************************ 00:17:11.131 07:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:11.131 07:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.131 07:42:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:17:11.131 07:42:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:11.131 07:42:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:11.131 07:42:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:11.131 07:42:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.131 ************************************ 00:17:11.131 START TEST nvmf_digest 00:17:11.131 ************************************ 00:17:11.131 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:11.390 * Looking for test storage... 00:17:11.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:11.390 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:11.391 07:42:36 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:11.391 Cannot find device "nvmf_tgt_br" 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # true 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:11.391 Cannot find device "nvmf_tgt_br2" 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # true 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:11.391 Cannot find device "nvmf_tgt_br" 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # true 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:11.391 Cannot find device "nvmf_tgt_br2" 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # true 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:11.391 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:11.391 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:11.391 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:11.649 07:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev 
nvmf_tgt_if2 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:11.649 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:11.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:17:11.650 00:17:11.650 --- 10.0.0.2 ping statistics --- 00:17:11.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.650 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:11.650 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:11.650 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:17:11.650 00:17:11.650 --- 10.0.0.3 ping statistics --- 00:17:11.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.650 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:11.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:11.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:11.650 00:17:11.650 --- 10.0.0.1 ping statistics --- 00:17:11.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.650 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:11.650 ************************************ 00:17:11.650 START TEST nvmf_digest_clean 00:17:11.650 ************************************ 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=79558 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 79558 00:17:11.650 
07:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79558 ']' 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:11.650 07:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:11.908 [2024-07-26 07:42:37.293434] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:17:11.908 [2024-07-26 07:42:37.293572] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.908 [2024-07-26 07:42:37.436858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.167 [2024-07-26 07:42:37.567170] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.167 [2024-07-26 07:42:37.567237] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.167 [2024-07-26 07:42:37.567256] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.167 [2024-07-26 07:42:37.567267] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.167 [2024-07-26 07:42:37.567277] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:12.167 [2024-07-26 07:42:37.567313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.734 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:12.734 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:12.734 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:12.734 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:12.734 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:12.734 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.734 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:12.734 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:12.734 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:12.734 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.734 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:12.992 [2024-07-26 07:42:38.377338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:12.992 null0 00:17:12.992 [2024-07-26 07:42:38.436370] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.992 [2024-07-26 07:42:38.460564] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.992 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.992 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:12.992 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:12.992 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:12.992 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:12.992 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:12.992 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:12.992 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:12.992 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79590 00:17:12.992 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:12.992 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79590 /var/tmp/bperf.sock 00:17:12.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:17:12.992 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79590 ']' 00:17:12.992 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:12.992 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:12.992 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:12.992 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:12.992 07:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:12.992 [2024-07-26 07:42:38.523157] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:17:12.992 [2024-07-26 07:42:38.523444] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79590 ] 00:17:13.250 [2024-07-26 07:42:38.665318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.250 [2024-07-26 07:42:38.800912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.816 07:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:13.816 07:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:13.816 07:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:13.816 07:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:13.816 07:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:14.074 [2024-07-26 07:42:39.670203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:14.332 07:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:14.332 07:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:14.590 nvme0n1 00:17:14.590 07:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:14.590 07:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:14.590 Running I/O for 2 seconds... 
00:17:17.117 00:17:17.118 Latency(us) 00:17:17.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.118 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:17.118 nvme0n1 : 2.01 16320.84 63.75 0.00 0.00 7836.90 7298.33 21805.61 00:17:17.118 =================================================================================================================== 00:17:17.118 Total : 16320.84 63.75 0.00 0.00 7836.90 7298.33 21805.61 00:17:17.118 0 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:17.118 | select(.opcode=="crc32c") 00:17:17.118 | "\(.module_name) \(.executed)"' 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79590 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79590 ']' 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79590 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79590 00:17:17.118 killing process with pid 79590 00:17:17.118 Received shutdown signal, test time was about 2.000000 seconds 00:17:17.118 00:17:17.118 Latency(us) 00:17:17.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.118 =================================================================================================================== 00:17:17.118 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79590' 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79590 00:17:17.118 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79590 00:17:17.376 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:17.376 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:17.376 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:17.376 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:17.376 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:17.376 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:17.376 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:17.376 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:17.376 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79651 00:17:17.376 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79651 /var/tmp/bperf.sock 00:17:17.376 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79651 ']' 00:17:17.376 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:17.376 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:17.376 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:17.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:17.376 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:17.376 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:17.376 [2024-07-26 07:42:42.777697] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:17:17.376 [2024-07-26 07:42:42.777945] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:17:17.376 Zero copy mechanism will not be used. 
00:17:17.376 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79651 ] 00:17:17.376 [2024-07-26 07:42:42.909714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.634 [2024-07-26 07:42:43.019432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.200 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:18.200 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:18.200 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:18.200 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:18.200 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:18.766 [2024-07-26 07:42:44.084233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:18.766 07:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:18.766 07:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:19.024 nvme0n1 00:17:19.024 07:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:19.024 07:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:19.024 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:19.024 Zero copy mechanism will not be used. 00:17:19.024 Running I/O for 2 seconds... 
00:17:20.926 00:17:20.926 Latency(us) 00:17:20.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.926 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:20.926 nvme0n1 : 2.00 7998.50 999.81 0.00 0.00 1997.15 1779.90 10187.87 00:17:20.926 =================================================================================================================== 00:17:20.926 Total : 7998.50 999.81 0.00 0.00 1997.15 1779.90 10187.87 00:17:20.926 0 00:17:21.185 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:21.185 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:21.185 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:21.185 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:21.185 | select(.opcode=="crc32c") 00:17:21.185 | "\(.module_name) \(.executed)"' 00:17:21.185 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:21.443 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:21.443 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:21.443 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:21.443 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:21.443 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79651 00:17:21.443 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79651 ']' 00:17:21.443 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79651 00:17:21.443 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:21.443 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:21.444 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79651 00:17:21.444 killing process with pid 79651 00:17:21.444 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:21.444 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:21.444 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79651' 00:17:21.444 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79651 00:17:21.444 Received shutdown signal, test time was about 2.000000 seconds 00:17:21.444 00:17:21.444 Latency(us) 00:17:21.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.444 =================================================================================================================== 00:17:21.444 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:21.444 07:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79651 00:17:21.702 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:21.703 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:21.703 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:21.703 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:21.703 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:21.703 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:21.703 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:21.703 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:21.703 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79716 00:17:21.703 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79716 /var/tmp/bperf.sock 00:17:21.703 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79716 ']' 00:17:21.703 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:21.703 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:21.703 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:21.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:21.703 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:21.703 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:21.703 [2024-07-26 07:42:47.195408] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:17:21.703 [2024-07-26 07:42:47.195664] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79716 ] 00:17:21.961 [2024-07-26 07:42:47.330126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.961 [2024-07-26 07:42:47.432148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.896 07:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:22.896 07:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:22.896 07:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:22.896 07:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:22.896 07:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:22.896 [2024-07-26 07:42:48.426138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:22.896 07:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:22.896 07:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:23.464 nvme0n1 00:17:23.464 07:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:23.464 07:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:23.464 Running I/O for 2 seconds... 
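As a quick cross-check of the bdevperf tables in this section, the MiB/s column is simply IOPS multiplied by the I/O size. For the 128 KiB randread run above:

\[ \frac{7998.50 \times 131072}{2^{20}} \approx 999.81\ \text{MiB/s} \]

and the 4 KiB randwrite table that follows satisfies the same relation (17370.17 x 4096 / 2^20 ≈ 67.85 MiB/s).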
00:17:25.367 00:17:25.367 Latency(us) 00:17:25.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.367 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.367 nvme0n1 : 2.00 17370.17 67.85 0.00 0.00 7362.70 6553.60 14834.97 00:17:25.367 =================================================================================================================== 00:17:25.367 Total : 17370.17 67.85 0.00 0.00 7362.70 6553.60 14834.97 00:17:25.367 0 00:17:25.367 07:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:25.367 07:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:25.367 07:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:25.367 07:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:25.367 07:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:25.367 | select(.opcode=="crc32c") 00:17:25.367 | "\(.module_name) \(.executed)"' 00:17:25.625 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:25.625 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:25.625 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:25.625 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:25.625 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79716 00:17:25.625 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79716 ']' 00:17:25.625 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79716 00:17:25.625 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:25.625 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:25.625 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79716 00:17:25.884 killing process with pid 79716 00:17:25.884 Received shutdown signal, test time was about 2.000000 seconds 00:17:25.884 00:17:25.884 Latency(us) 00:17:25.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.884 =================================================================================================================== 00:17:25.884 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:25.884 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:25.884 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:25.884 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79716' 00:17:25.884 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79716 00:17:25.884 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79716 00:17:26.143 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:26.143 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:26.143 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:26.143 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:26.143 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:26.143 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:26.143 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:26.143 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79773 00:17:26.143 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79773 /var/tmp/bperf.sock 00:17:26.143 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:26.143 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79773 ']' 00:17:26.143 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:26.143 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:26.143 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:26.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:26.143 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:26.143 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:26.143 [2024-07-26 07:42:51.579610] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:17:26.143 [2024-07-26 07:42:51.579866] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:17:26.143 Zero copy mechanism will not be used. 
00:17:26.143 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79773 ] 00:17:26.143 [2024-07-26 07:42:51.709867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.401 [2024-07-26 07:42:51.824360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.968 07:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:26.968 07:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:26.968 07:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:26.968 07:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:26.968 07:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:27.227 [2024-07-26 07:42:52.804804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:27.486 07:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:27.486 07:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:27.745 nvme0n1 00:17:27.745 07:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:27.745 07:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:27.745 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:27.745 Zero copy mechanism will not be used. 00:17:27.745 Running I/O for 2 seconds... 
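A note on the verification step traced after each of these tables (host/digest.sh@93-@96 above, and again after the table below): a run only counts as clean if the crc32c work was actually executed, and by the expected accel module; with scan_dsa=false the expected module is software. A rough sketch of that check, reconstructed from the traced commands (the exact shell plumbing inside digest.sh may differ):

    # read "<module_name> <executed>" for the crc32c opcode from bdevperf's accel statistics
    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
            jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 ))            # crc32c must have been exercised at least once
    [[ $acc_module == software ]]     # and, with scan_dsa=false, by the software module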
00:17:30.274 00:17:30.274 Latency(us) 00:17:30.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.274 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:30.274 nvme0n1 : 2.00 6747.57 843.45 0.00 0.00 2365.47 1787.35 4230.05 00:17:30.274 =================================================================================================================== 00:17:30.274 Total : 6747.57 843.45 0.00 0.00 2365.47 1787.35 4230.05 00:17:30.274 0 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:30.274 | select(.opcode=="crc32c") 00:17:30.274 | "\(.module_name) \(.executed)"' 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79773 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79773 ']' 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79773 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79773 00:17:30.274 killing process with pid 79773 00:17:30.274 Received shutdown signal, test time was about 2.000000 seconds 00:17:30.274 00:17:30.274 Latency(us) 00:17:30.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.274 =================================================================================================================== 00:17:30.274 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79773' 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79773 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79773 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79558 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79558 ']' 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79558 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:30.274 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79558 00:17:30.533 killing process with pid 79558 00:17:30.533 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:30.533 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:30.533 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79558' 00:17:30.533 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79558 00:17:30.533 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 79558 00:17:30.792 00:17:30.792 real 0m18.959s 00:17:30.792 user 0m36.181s 00:17:30.792 sys 0m4.923s 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:30.792 ************************************ 00:17:30.792 END TEST nvmf_digest_clean 00:17:30.792 ************************************ 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:30.792 ************************************ 00:17:30.792 START TEST nvmf_digest_error 00:17:30.792 ************************************ 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:30.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
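Before the digest-error run below gets going, here is a condensed sketch of the RPC sequence that each run_bperf invocation above drives over /var/tmp/bperf.sock. Every command and argument is copied from the trace (shown for the 128 KiB randread case), so this summarizes the log rather than reproducing the test script itself:

    # launch bdevperf paused (-z --wait-for-rpc), talking RPC on the bperf socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &   # backgrounded; the test records its pid as bperfpid
    # finish framework init, then attach the NVMe-oF TCP target with data digest (--ddgst) enabled
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # run the 2-second workload against the resulting nvme0n1 bdev
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

After the table is printed, the crc32c accounting check sketched earlier runs and the bdevperf process is shut down (the killprocess / 'Received shutdown signal' blocks above).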
00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=79862 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 79862 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79862 ']' 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:30.792 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:30.792 [2024-07-26 07:42:56.306802] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:17:30.792 [2024-07-26 07:42:56.306886] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.051 [2024-07-26 07:42:56.446395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.051 [2024-07-26 07:42:56.558033] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.051 [2024-07-26 07:42:56.558085] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.051 [2024-07-26 07:42:56.558112] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.051 [2024-07-26 07:42:56.558120] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.051 [2024-07-26 07:42:56.558128] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
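For orientation, the nvmf_digest_error setup traced below differs from the clean runs in how crc32c is handled: the opcode is routed to the error-injection accel module on the target side (rpc_cmd talks to the nvmf_tgt at /var/tmp/spdk.sock, while bperf_rpc and bperf_py talk to bdevperf at /var/tmp/bperf.sock, as their expansions in the trace show), injection stays disabled while the controller attaches, and is then switched to corrupt so that reads complete with data digest errors, which is the wall of nvme_tcp/nvme_qpair messages filling the rest of this section. Condensed from the traced commands, in the order they appear:

    rpc_cmd accel_assign_opc -o crc32c -m error            # target: crc32c handled by the 'error' module
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable  # no corruption yet while attaching
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # start corrupting digests
    bperf_py perform_tests                                 # reads now fail the data digest check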
00:17:31.051 [2024-07-26 07:42:56.558154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:31.986 [2024-07-26 07:42:57.346750] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:31.986 [2024-07-26 07:42:57.428068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:31.986 null0 00:17:31.986 [2024-07-26 07:42:57.486415] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.986 [2024-07-26 07:42:57.510587] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79894 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79894 /var/tmp/bperf.sock 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:31.986 07:42:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79894 ']' 00:17:31.986 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:31.987 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:31.987 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:31.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:31.987 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:31.987 07:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:31.987 [2024-07-26 07:42:57.572987] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:17:31.987 [2024-07-26 07:42:57.573269] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79894 ] 00:17:32.245 [2024-07-26 07:42:57.713013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.503 [2024-07-26 07:42:57.852183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.503 [2024-07-26 07:42:57.925406] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:33.070 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.070 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:17:33.070 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:33.070 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:33.328 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:33.328 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.328 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:33.328 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.328 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:33.328 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:33.587 nvme0n1 00:17:33.587 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:33.587 07:42:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.587 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:33.587 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.587 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:33.587 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:33.587 Running I/O for 2 seconds... 00:17:33.844 [2024-07-26 07:42:59.213401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:33.844 [2024-07-26 07:42:59.213459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.844 [2024-07-26 07:42:59.213491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.844 [2024-07-26 07:42:59.229443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:33.844 [2024-07-26 07:42:59.229494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.844 [2024-07-26 07:42:59.229526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.844 [2024-07-26 07:42:59.245233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:33.844 [2024-07-26 07:42:59.245272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.844 [2024-07-26 07:42:59.245286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.844 [2024-07-26 07:42:59.261092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:33.844 [2024-07-26 07:42:59.261129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.844 [2024-07-26 07:42:59.261158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.844 [2024-07-26 07:42:59.277326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:33.844 [2024-07-26 07:42:59.277366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.844 [2024-07-26 07:42:59.277380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.844 [2024-07-26 07:42:59.293159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:33.844 [2024-07-26 07:42:59.293197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3820 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.844 [2024-07-26 07:42:59.293250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.844 [2024-07-26 07:42:59.309119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:33.844 [2024-07-26 07:42:59.309158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.844 [2024-07-26 07:42:59.309187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.844 [2024-07-26 07:42:59.324948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:33.844 [2024-07-26 07:42:59.324987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.844 [2024-07-26 07:42:59.325016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.844 [2024-07-26 07:42:59.340900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:33.844 [2024-07-26 07:42:59.340938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.844 [2024-07-26 07:42:59.340967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.844 [2024-07-26 07:42:59.356930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:33.844 [2024-07-26 07:42:59.356967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.844 [2024-07-26 07:42:59.356996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.844 [2024-07-26 07:42:59.372954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:33.844 [2024-07-26 07:42:59.372990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.844 [2024-07-26 07:42:59.373019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.844 [2024-07-26 07:42:59.388788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:33.844 [2024-07-26 07:42:59.388824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.844 [2024-07-26 07:42:59.388854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.844 [2024-07-26 07:42:59.404656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:33.844 [2024-07-26 07:42:59.404691] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.844 [2024-07-26 07:42:59.404720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.844 [2024-07-26 07:42:59.420645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:33.844 [2024-07-26 07:42:59.420681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.844 [2024-07-26 07:42:59.420710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.844 [2024-07-26 07:42:59.436476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:33.844 [2024-07-26 07:42:59.436542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.844 [2024-07-26 07:42:59.436571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.102 [2024-07-26 07:42:59.453144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.102 [2024-07-26 07:42:59.453182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.102 [2024-07-26 07:42:59.453246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.102 [2024-07-26 07:42:59.470200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.102 [2024-07-26 07:42:59.470237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.102 [2024-07-26 07:42:59.470266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.102 [2024-07-26 07:42:59.486267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.102 [2024-07-26 07:42:59.486304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.102 [2024-07-26 07:42:59.486332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.102 [2024-07-26 07:42:59.502211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.102 [2024-07-26 07:42:59.502248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.102 [2024-07-26 07:42:59.502277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.102 [2024-07-26 07:42:59.518220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.102 [2024-07-26 07:42:59.518258] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.102 [2024-07-26 07:42:59.518287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.102 [2024-07-26 07:42:59.534286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.102 [2024-07-26 07:42:59.534324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.102 [2024-07-26 07:42:59.534352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.102 [2024-07-26 07:42:59.550323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.102 [2024-07-26 07:42:59.550360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.102 [2024-07-26 07:42:59.550388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.102 [2024-07-26 07:42:59.566425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.102 [2024-07-26 07:42:59.566462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.102 [2024-07-26 07:42:59.566521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.102 [2024-07-26 07:42:59.582335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.102 [2024-07-26 07:42:59.582371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.102 [2024-07-26 07:42:59.582401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.102 [2024-07-26 07:42:59.598201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.102 [2024-07-26 07:42:59.598237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.102 [2024-07-26 07:42:59.598266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.102 [2024-07-26 07:42:59.614206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.102 [2024-07-26 07:42:59.614242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.102 [2024-07-26 07:42:59.614271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.102 [2024-07-26 07:42:59.630184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x100d4f0) 00:17:34.103 [2024-07-26 07:42:59.630220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.103 [2024-07-26 07:42:59.630249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.103 [2024-07-26 07:42:59.646109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.103 [2024-07-26 07:42:59.646145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.103 [2024-07-26 07:42:59.646174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.103 [2024-07-26 07:42:59.662078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.103 [2024-07-26 07:42:59.662114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.103 [2024-07-26 07:42:59.662143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.103 [2024-07-26 07:42:59.677940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.103 [2024-07-26 07:42:59.677976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.103 [2024-07-26 07:42:59.678004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.103 [2024-07-26 07:42:59.693994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.103 [2024-07-26 07:42:59.694030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.103 [2024-07-26 07:42:59.694058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.361 [2024-07-26 07:42:59.710181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.361 [2024-07-26 07:42:59.710217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.361 [2024-07-26 07:42:59.710245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.361 [2024-07-26 07:42:59.725789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.361 [2024-07-26 07:42:59.725824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.361 [2024-07-26 07:42:59.725852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.361 [2024-07-26 07:42:59.741045] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.361 [2024-07-26 07:42:59.741080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.361 [2024-07-26 07:42:59.741108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.361 [2024-07-26 07:42:59.756520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.361 [2024-07-26 07:42:59.756555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.361 [2024-07-26 07:42:59.756583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.361 [2024-07-26 07:42:59.771893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.361 [2024-07-26 07:42:59.771927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.361 [2024-07-26 07:42:59.771956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.361 [2024-07-26 07:42:59.787336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.361 [2024-07-26 07:42:59.787371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.361 [2024-07-26 07:42:59.787399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.361 [2024-07-26 07:42:59.802736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.361 [2024-07-26 07:42:59.802771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.361 [2024-07-26 07:42:59.802799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.361 [2024-07-26 07:42:59.818142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.361 [2024-07-26 07:42:59.818177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.361 [2024-07-26 07:42:59.818204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.361 [2024-07-26 07:42:59.833670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.361 [2024-07-26 07:42:59.833706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.361 [2024-07-26 07:42:59.833734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:34.361 [2024-07-26 07:42:59.848989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.361 [2024-07-26 07:42:59.849026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.361 [2024-07-26 07:42:59.849053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.361 [2024-07-26 07:42:59.864425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.361 [2024-07-26 07:42:59.864461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.361 [2024-07-26 07:42:59.864534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.361 [2024-07-26 07:42:59.879774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.361 [2024-07-26 07:42:59.879808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.361 [2024-07-26 07:42:59.879837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.361 [2024-07-26 07:42:59.895208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.361 [2024-07-26 07:42:59.895242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.361 [2024-07-26 07:42:59.895271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.361 [2024-07-26 07:42:59.910796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.361 [2024-07-26 07:42:59.910832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.361 [2024-07-26 07:42:59.910845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.361 [2024-07-26 07:42:59.926140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.361 [2024-07-26 07:42:59.926174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.361 [2024-07-26 07:42:59.926202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.361 [2024-07-26 07:42:59.941694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.361 [2024-07-26 07:42:59.941731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.361 [2024-07-26 07:42:59.941760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.361 [2024-07-26 07:42:59.957260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.361 [2024-07-26 07:42:59.957297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.361 [2024-07-26 07:42:59.957310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.620 [2024-07-26 07:42:59.973432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.620 [2024-07-26 07:42:59.973482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.620 [2024-07-26 07:42:59.973514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.620 [2024-07-26 07:42:59.988721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.620 [2024-07-26 07:42:59.988756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.620 [2024-07-26 07:42:59.988785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.620 [2024-07-26 07:43:00.004037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.620 [2024-07-26 07:43:00.004071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.620 [2024-07-26 07:43:00.004099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.620 [2024-07-26 07:43:00.019486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.620 [2024-07-26 07:43:00.019520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.620 [2024-07-26 07:43:00.019548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.620 [2024-07-26 07:43:00.035232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.620 [2024-07-26 07:43:00.035272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.620 [2024-07-26 07:43:00.035286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.620 [2024-07-26 07:43:00.050818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.620 [2024-07-26 07:43:00.050853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.620 [2024-07-26 07:43:00.050881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.620 [2024-07-26 07:43:00.066347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.620 [2024-07-26 07:43:00.066383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.620 [2024-07-26 07:43:00.066412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.620 [2024-07-26 07:43:00.082015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.620 [2024-07-26 07:43:00.082050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.620 [2024-07-26 07:43:00.082078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.620 [2024-07-26 07:43:00.097568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.620 [2024-07-26 07:43:00.097604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.620 [2024-07-26 07:43:00.097632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.620 [2024-07-26 07:43:00.113429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.620 [2024-07-26 07:43:00.113476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.620 [2024-07-26 07:43:00.113492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.620 [2024-07-26 07:43:00.130113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.620 [2024-07-26 07:43:00.130150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.620 [2024-07-26 07:43:00.130179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.620 [2024-07-26 07:43:00.147011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.620 [2024-07-26 07:43:00.147047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.620 [2024-07-26 07:43:00.147076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.620 [2024-07-26 07:43:00.163666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.620 [2024-07-26 07:43:00.163704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24843 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:34.620 [2024-07-26 07:43:00.163718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.620 [2024-07-26 07:43:00.180883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.620 [2024-07-26 07:43:00.180918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.620 [2024-07-26 07:43:00.180946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.620 [2024-07-26 07:43:00.197733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.620 [2024-07-26 07:43:00.197769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.620 [2024-07-26 07:43:00.197798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.878 [2024-07-26 07:43:00.220542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.878 [2024-07-26 07:43:00.220577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.878 [2024-07-26 07:43:00.220606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.878 [2024-07-26 07:43:00.236748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.878 [2024-07-26 07:43:00.236783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.878 [2024-07-26 07:43:00.236812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.878 [2024-07-26 07:43:00.252850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.878 [2024-07-26 07:43:00.252903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.878 [2024-07-26 07:43:00.252932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.878 [2024-07-26 07:43:00.269174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.878 [2024-07-26 07:43:00.269237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.878 [2024-07-26 07:43:00.269252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.878 [2024-07-26 07:43:00.285774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.878 [2024-07-26 07:43:00.285811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:118 nsid:1 lba:11473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.878 [2024-07-26 07:43:00.285839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.878 [2024-07-26 07:43:00.301929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.878 [2024-07-26 07:43:00.301965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.878 [2024-07-26 07:43:00.301994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.878 [2024-07-26 07:43:00.318020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.878 [2024-07-26 07:43:00.318057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.878 [2024-07-26 07:43:00.318085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.878 [2024-07-26 07:43:00.334033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.878 [2024-07-26 07:43:00.334068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.878 [2024-07-26 07:43:00.334097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.878 [2024-07-26 07:43:00.349990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.878 [2024-07-26 07:43:00.350026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.878 [2024-07-26 07:43:00.350054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.878 [2024-07-26 07:43:00.365822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.878 [2024-07-26 07:43:00.365859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.878 [2024-07-26 07:43:00.365888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.878 [2024-07-26 07:43:00.381234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.878 [2024-07-26 07:43:00.381272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.879 [2024-07-26 07:43:00.381286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.879 [2024-07-26 07:43:00.396628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.879 [2024-07-26 
07:43:00.396663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.879 [2024-07-26 07:43:00.396692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.879 [2024-07-26 07:43:00.411995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.879 [2024-07-26 07:43:00.412030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.879 [2024-07-26 07:43:00.412058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.879 [2024-07-26 07:43:00.427564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.879 [2024-07-26 07:43:00.427599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.879 [2024-07-26 07:43:00.427627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.879 [2024-07-26 07:43:00.442947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.879 [2024-07-26 07:43:00.442982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.879 [2024-07-26 07:43:00.443010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.879 [2024-07-26 07:43:00.458470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.879 [2024-07-26 07:43:00.458530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.879 [2024-07-26 07:43:00.458561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.879 [2024-07-26 07:43:00.474798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:34.879 [2024-07-26 07:43:00.474832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.879 [2024-07-26 07:43:00.474845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.137 [2024-07-26 07:43:00.491554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.137 [2024-07-26 07:43:00.491590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.137 [2024-07-26 07:43:00.491619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.137 [2024-07-26 07:43:00.506998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x100d4f0) 00:17:35.137 [2024-07-26 07:43:00.507033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.137 [2024-07-26 07:43:00.507061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.137 [2024-07-26 07:43:00.522662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.137 [2024-07-26 07:43:00.522696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.137 [2024-07-26 07:43:00.522724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.137 [2024-07-26 07:43:00.538119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.137 [2024-07-26 07:43:00.538154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.137 [2024-07-26 07:43:00.538183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.137 [2024-07-26 07:43:00.553671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.137 [2024-07-26 07:43:00.553721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.137 [2024-07-26 07:43:00.553750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.137 [2024-07-26 07:43:00.569156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.137 [2024-07-26 07:43:00.569193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.137 [2024-07-26 07:43:00.569251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.137 [2024-07-26 07:43:00.584695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.137 [2024-07-26 07:43:00.584730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.137 [2024-07-26 07:43:00.584757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.137 [2024-07-26 07:43:00.599970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.137 [2024-07-26 07:43:00.600005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.137 [2024-07-26 07:43:00.600033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.137 [2024-07-26 07:43:00.615416] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.137 [2024-07-26 07:43:00.615451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.137 [2024-07-26 07:43:00.615479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.137 [2024-07-26 07:43:00.630721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.137 [2024-07-26 07:43:00.630755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.137 [2024-07-26 07:43:00.630783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.137 [2024-07-26 07:43:00.646112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.137 [2024-07-26 07:43:00.646147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.138 [2024-07-26 07:43:00.646175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.138 [2024-07-26 07:43:00.661588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.138 [2024-07-26 07:43:00.661624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.138 [2024-07-26 07:43:00.661637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.138 [2024-07-26 07:43:00.676858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.138 [2024-07-26 07:43:00.676892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.138 [2024-07-26 07:43:00.676920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.138 [2024-07-26 07:43:00.692736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.138 [2024-07-26 07:43:00.692771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.138 [2024-07-26 07:43:00.692799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.138 [2024-07-26 07:43:00.708525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.138 [2024-07-26 07:43:00.708561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.138 [2024-07-26 07:43:00.708590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:35.138 [2024-07-26 07:43:00.725337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.138 [2024-07-26 07:43:00.725380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.138 [2024-07-26 07:43:00.725395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 07:43:00.742118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.398 [2024-07-26 07:43:00.742161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.398 [2024-07-26 07:43:00.742192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 07:43:00.758099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.398 [2024-07-26 07:43:00.758135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.398 [2024-07-26 07:43:00.758163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 07:43:00.774216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.398 [2024-07-26 07:43:00.774252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.398 [2024-07-26 07:43:00.774282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 07:43:00.790183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.398 [2024-07-26 07:43:00.790219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.398 [2024-07-26 07:43:00.790248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 07:43:00.806069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.398 [2024-07-26 07:43:00.806105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.398 [2024-07-26 07:43:00.806133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 07:43:00.822162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.398 [2024-07-26 07:43:00.822200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.398 [2024-07-26 07:43:00.822214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 07:43:00.838032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.398 [2024-07-26 07:43:00.838067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.398 [2024-07-26 07:43:00.838096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 07:43:00.853907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.398 [2024-07-26 07:43:00.853942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.398 [2024-07-26 07:43:00.853971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 07:43:00.869824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.398 [2024-07-26 07:43:00.869859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.398 [2024-07-26 07:43:00.869888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 07:43:00.885660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.398 [2024-07-26 07:43:00.885710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.398 [2024-07-26 07:43:00.885739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 07:43:00.901547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.398 [2024-07-26 07:43:00.901583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.398 [2024-07-26 07:43:00.901596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 07:43:00.917355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.398 [2024-07-26 07:43:00.917393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.398 [2024-07-26 07:43:00.917407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 07:43:00.933117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.398 [2024-07-26 07:43:00.933153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.398 [2024-07-26 07:43:00.933181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 07:43:00.949387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.398 [2024-07-26 07:43:00.949427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.398 [2024-07-26 07:43:00.949440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 07:43:00.966259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.398 [2024-07-26 07:43:00.966297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.398 [2024-07-26 07:43:00.966326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 07:43:00.982223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.398 [2024-07-26 07:43:00.982261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.398 [2024-07-26 07:43:00.982290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 07:43:00.998659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.398 [2024-07-26 07:43:00.998697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.398 [2024-07-26 07:43:00.998712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.678 [2024-07-26 07:43:01.015396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.678 [2024-07-26 07:43:01.015434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.679 [2024-07-26 07:43:01.015463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.679 [2024-07-26 07:43:01.031623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.679 [2024-07-26 07:43:01.031661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.679 [2024-07-26 07:43:01.031690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.679 [2024-07-26 07:43:01.047493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.679 [2024-07-26 07:43:01.047528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:35.679 [2024-07-26 07:43:01.047557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.679 [2024-07-26 07:43:01.063351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.679 [2024-07-26 07:43:01.063387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.679 [2024-07-26 07:43:01.063416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.679 [2024-07-26 07:43:01.079334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.679 [2024-07-26 07:43:01.079369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.679 [2024-07-26 07:43:01.079397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.679 [2024-07-26 07:43:01.095290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.679 [2024-07-26 07:43:01.095326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.679 [2024-07-26 07:43:01.095355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.679 [2024-07-26 07:43:01.111238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.679 [2024-07-26 07:43:01.111273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.679 [2024-07-26 07:43:01.111302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.679 [2024-07-26 07:43:01.127248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.679 [2024-07-26 07:43:01.127283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.679 [2024-07-26 07:43:01.127311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.679 [2024-07-26 07:43:01.143197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.679 [2024-07-26 07:43:01.143232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.679 [2024-07-26 07:43:01.143261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.679 [2024-07-26 07:43:01.159139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0) 00:17:35.679 [2024-07-26 07:43:01.159175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:19706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:35.679 [2024-07-26 07:43:01.159203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:35.679 [2024-07-26 07:43:01.175014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x100d4f0)
00:17:35.679 [2024-07-26 07:43:01.175049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:35.679 [2024-07-26 07:43:01.175077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:35.679
00:17:35.679 Latency(us)
00:17:35.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:35.679 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:17:35.679 nvme0n1 : 2.00 15841.36 61.88 0.00 0.00 8074.76 7387.69 31218.97
00:17:35.679 ===================================================================================================================
00:17:35.679 Total : 15841.36 61.88 0.00 0.00 8074.76 7387.69 31218.97
00:17:35.679 0
00:17:35.679 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:35.679 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:35.679 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:35.679 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:35.679 | .driver_specific
00:17:35.679 | .nvme_error
00:17:35.679 | .status_code
00:17:35.679 | .command_transient_transport_error'
00:17:35.958 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 124 > 0 ))
00:17:35.958 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79894
00:17:35.958 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79894 ']'
00:17:35.958 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79894
00:17:35.958 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:17:35.958 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:35.958 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79894
00:17:35.958 killing process with pid 79894 Received shutdown signal, test time was about 2.000000 seconds
00:17:35.958
00:17:35.958 Latency(us)
00:17:35.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:35.958 ===================================================================================================================
00:17:35.958 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:35.958 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:17:35.958 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:17:35.958 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79894'
00:17:35.958 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79894
00:17:36.224 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79894
00:17:36.224 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:17:36.224 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:17:36.224 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:17:36.224 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:17:36.224 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:17:36.224 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79954
00:17:36.224 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79954 /var/tmp/bperf.sock
00:17:36.224 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:17:36.224 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79954 ']'
00:17:36.224 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:36.224 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:36.224 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:36.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:36.224 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:17:36.224 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:36.224 [2024-07-26 07:43:01.814005] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization...
00:17:36.224 [2024-07-26 07:43:01.814260] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79954 ]
00:17:36.224 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:36.224 Zero copy mechanism will not be used.
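For reference, the get_transient_errcount trace above boils down to one bdev_get_iostat RPC filtered with jq. A minimal sketch of the same query follows; the rpc.py path, socket, bdev name and jq filter are taken from this run, the shell variable names are only for the sketch, and the counter is populated because the test enables bdev_nvme_set_options --nvme-error-stat before attaching the controller.

# Sketch of the get_transient_errcount helper (host/digest.sh@27-28 above), not a verbatim copy.
rpc_sock=/var/tmp/bperf.sock   # RPC socket handed to bdevperf via -r
bdev=nvme0n1                   # bdev created by bdev_nvme_attach_controller -b nvme0
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
  jq -r '.bdevs[0]
         | .driver_specific
         | .nvme_error
         | .status_code
         | .command_transient_transport_error')
# The test only asserts the counter is non-zero, e.g. the (( 124 > 0 )) check above.
(( errcount > 0 )) && echo "saw $errcount transient transport errors on $bdev"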
00:17:36.483 [2024-07-26 07:43:01.945845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:36.483 [2024-07-26 07:43:02.046567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:17:36.741 [2024-07-26 07:43:02.119261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:17:37.308 07:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:17:37.308 07:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:17:37.308 07:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:37.308 07:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:37.566 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:37.566 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.566 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:37.566 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:37.566 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:37.566 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:37.825 nvme0n1
00:17:37.825 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:17:37.825 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.825 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:37.825 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:37.825 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:38.084 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:17:38.084 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:38.084 Zero copy mechanism will not be used.
00:17:38.084 Running I/O for 2 seconds...
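The RPC sequence just traced is the data-digest error case in miniature: set bdev_nvme options (--nvme-error-stat, --bdev-retry-count -1), clear any previous crc32c injection, attach the controller with the TCP data digest enabled (--ddgst), arm crc32c corruption in the accel_error module, and kick off the bdevperf job; the digest errors that follow are the expected result. Below is a condensed sketch of the same sequence, with paths, addresses and names copied from this run; note the accel_error_inject_error calls go through the harness's rpc_cmd helper above, whose target socket is not shown here, so rpc.py's default socket is assumed for those two calls.

# Condensed sketch of host/digest.sh@61-69 as traced above (not the script itself).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock   # bdevperf RPC socket (-r above)

# Keep per-controller NVMe error statistics on the bdevperf side.
"$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Presumably clears any crc32c injection left armed by an earlier case.
"$rpc" accel_error_inject_error -o crc32c -t disable

# Attach the target subsystem with the TCP data digest enabled.
"$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm crc32c corruption so received data digests stop verifying, then drive I/O;
# afterwards the command_transient_transport_error counter should be non-zero.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests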
00:17:38.084 [2024-07-26 07:43:03.490161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.084 [2024-07-26 07:43:03.490232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.084 [2024-07-26 07:43:03.490252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.084 [2024-07-26 07:43:03.494606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.084 [2024-07-26 07:43:03.494646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.084 [2024-07-26 07:43:03.494660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.084 [2024-07-26 07:43:03.498955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.084 [2024-07-26 07:43:03.498994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.084 [2024-07-26 07:43:03.499022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.084 [2024-07-26 07:43:03.503080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.084 [2024-07-26 07:43:03.503119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.084 [2024-07-26 07:43:03.503148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.084 [2024-07-26 07:43:03.507343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.084 [2024-07-26 07:43:03.507381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.084 [2024-07-26 07:43:03.507411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.084 [2024-07-26 07:43:03.511661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.084 [2024-07-26 07:43:03.511699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.084 [2024-07-26 07:43:03.511728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.084 [2024-07-26 07:43:03.515812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.084 [2024-07-26 07:43:03.515851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.084 [2024-07-26 07:43:03.515880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.084 [2024-07-26 07:43:03.519983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.084 [2024-07-26 07:43:03.520021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.084 [2024-07-26 07:43:03.520050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.084 [2024-07-26 07:43:03.524284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.084 [2024-07-26 07:43:03.524322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.084 [2024-07-26 07:43:03.524351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.084 [2024-07-26 07:43:03.528929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.084 [2024-07-26 07:43:03.528968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.084 [2024-07-26 07:43:03.528997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.084 [2024-07-26 07:43:03.533420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.084 [2024-07-26 07:43:03.533461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.084 [2024-07-26 07:43:03.533491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.084 [2024-07-26 07:43:03.537883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.084 [2024-07-26 07:43:03.537922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.084 [2024-07-26 07:43:03.537951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.084 [2024-07-26 07:43:03.542640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.084 [2024-07-26 07:43:03.542679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.084 [2024-07-26 07:43:03.542708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.084 [2024-07-26 07:43:03.546939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.084 [2024-07-26 07:43:03.546977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.084 [2024-07-26 07:43:03.547006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.084 [2024-07-26 07:43:03.551202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.084 [2024-07-26 07:43:03.551240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.084 [2024-07-26 07:43:03.551269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.084 [2024-07-26 07:43:03.555560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.084 [2024-07-26 07:43:03.555598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.084 [2024-07-26 07:43:03.555627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.084 [2024-07-26 07:43:03.559687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.084 [2024-07-26 07:43:03.559726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.084 [2024-07-26 07:43:03.559754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.084 [2024-07-26 07:43:03.563997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.084 [2024-07-26 07:43:03.564035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.084 [2024-07-26 07:43:03.564064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.084 [2024-07-26 07:43:03.568174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.568212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.568241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.572444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.572528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.572544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.576669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.576707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:38.085 [2024-07-26 07:43:03.576735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.580793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.580830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.580858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.584996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.585033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.585061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.589236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.589278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.589292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.593683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.593720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.593749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.597912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.597948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.597977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.602194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.602231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.602261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.606545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.606581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.606609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.610650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.610687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.610716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.614810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.614847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.614876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.619057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.619095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.619123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.623400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.623437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.623466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.627589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.627626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.627655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.631833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.631886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.631914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.636108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.636146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.636174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.640298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.640336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.640364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.644516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.644552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.644580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.648680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.648717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.648745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.652749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.652786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.652814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.656952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.656989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.657017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.661194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.661257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.661287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.665465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 
00:17:38.085 [2024-07-26 07:43:03.665513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.665527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.669749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.669786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.669815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.673876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.673913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.085 [2024-07-26 07:43:03.673941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.085 [2024-07-26 07:43:03.678069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.085 [2024-07-26 07:43:03.678106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.086 [2024-07-26 07:43:03.678134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.086 [2024-07-26 07:43:03.682366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.086 [2024-07-26 07:43:03.682404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.086 [2024-07-26 07:43:03.682432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.686876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.686913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.686941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.691254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.691292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.691320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.695555] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.695593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.695621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.699691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.699729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.699757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.703960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.703998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.704027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.708108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.708145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.708173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.712271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.712308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.712337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.716398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.716436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.716464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.720644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.720681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.720710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.724805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.724842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.724870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.728947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.728985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.729013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.733152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.733190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.733242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.737408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.737449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.737462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.741557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.741594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.741608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.745682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.745718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.745746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.749874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.749911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.749939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.754143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.754180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.754209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.758400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.758438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.758466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.762689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.762726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.762754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.766845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.766881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.766909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.771145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.771183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.771212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.775355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.775392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.775420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.779650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.779689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.779717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.783916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.783953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.783983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.788123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.788160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.788188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.792332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.792370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.792398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.796465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.796546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.796561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.800702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.800742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.800756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.805165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.805226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.805257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.809601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.809640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:38.345 [2024-07-26 07:43:03.809654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.813981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.814019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.814049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.818458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.818530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.818545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.822900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.822939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.822968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.827369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.827407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.827436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.831943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.831981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.832009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.836290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.836328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.836357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.840620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.840658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.840687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.844876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.844914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.345 [2024-07-26 07:43:03.844943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.345 [2024-07-26 07:43:03.849268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.345 [2024-07-26 07:43:03.849309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.849322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.853412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.853452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.853481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.857407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.857447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.857460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.861475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.861546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.861560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.865588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.865641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.865669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.869884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.869922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.869951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.874184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.874222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.874250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.878340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.878378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.878406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.882722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.882762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.882790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.887083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.887122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.887151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.891374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.891414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.891442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.895719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.895757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.895785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.899965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 
00:17:38.346 [2024-07-26 07:43:03.900004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.900032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.904293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.904335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.904349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.908487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.908524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.908552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.912701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.912739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.912767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.916822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.916860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.916888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.920926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.920964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.920992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.925377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.925419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.925433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.929591] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.929630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.929673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.933879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.933917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.933945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.938128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.938166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.938195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.346 [2024-07-26 07:43:03.942516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.346 [2024-07-26 07:43:03.942576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.346 [2024-07-26 07:43:03.942592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.605 [2024-07-26 07:43:03.946989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.605 [2024-07-26 07:43:03.947030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.605 [2024-07-26 07:43:03.947059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.605 [2024-07-26 07:43:03.951492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.605 [2024-07-26 07:43:03.951528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.605 [2024-07-26 07:43:03.951556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.605 [2024-07-26 07:43:03.955715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.605 [2024-07-26 07:43:03.955752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.605 [2024-07-26 07:43:03.955781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:38.605 [2024-07-26 07:43:03.959933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.605 [2024-07-26 07:43:03.959971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.605 [2024-07-26 07:43:03.959999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.605 [2024-07-26 07:43:03.964189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.605 [2024-07-26 07:43:03.964228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.605 [2024-07-26 07:43:03.964256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.605 [2024-07-26 07:43:03.968363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.605 [2024-07-26 07:43:03.968401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.605 [2024-07-26 07:43:03.968429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.605 [2024-07-26 07:43:03.972704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.605 [2024-07-26 07:43:03.972742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.605 [2024-07-26 07:43:03.972771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.605 [2024-07-26 07:43:03.976780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.605 [2024-07-26 07:43:03.976817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.605 [2024-07-26 07:43:03.976845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.605 [2024-07-26 07:43:03.981035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.605 [2024-07-26 07:43:03.981088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.605 [2024-07-26 07:43:03.981116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.605 [2024-07-26 07:43:03.985245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.605 [2024-07-26 07:43:03.985304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.605 [2024-07-26 07:43:03.985318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.605 [2024-07-26 07:43:03.989344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.605 [2024-07-26 07:43:03.989383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.605 [2024-07-26 07:43:03.989396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.605 [2024-07-26 07:43:03.993503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.605 [2024-07-26 07:43:03.993558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.605 [2024-07-26 07:43:03.993572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.605 [2024-07-26 07:43:03.997648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.605 [2024-07-26 07:43:03.997700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.605 [2024-07-26 07:43:03.997729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.606 [2024-07-26 07:43:04.002089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.606 [2024-07-26 07:43:04.002128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.606 [2024-07-26 07:43:04.002158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.606 [2024-07-26 07:43:04.006405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.606 [2024-07-26 07:43:04.006444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.606 [2024-07-26 07:43:04.006472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.606 [2024-07-26 07:43:04.010736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.606 [2024-07-26 07:43:04.010775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.606 [2024-07-26 07:43:04.010804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.606 [2024-07-26 07:43:04.014959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:38.606 [2024-07-26 07:43:04.014997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.606 [2024-07-26 07:43:04.015026] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:38.606 [2024-07-26 07:43:04.019362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200)
00:17:38.606 [2024-07-26 07:43:04.019402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:38.606 [2024-07-26 07:43:04.019431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:38.606 [2024-07-26 07:43:04.023766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200)
00:17:38.606 [2024-07-26 07:43:04.023805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:38.606 [2024-07-26 07:43:04.023834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-record sequence (nvme_tcp.c:1459 data digest error on tqpair=(0x1d3f200), nvme_qpair.c:243 READ sqid:1 cid:15 command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for many further LBAs from 00:17:38.606 through 00:17:39.129 (07:43:04.019 to 07:43:04.629) ...]
00:17:39.129 [2024-07-26 07:43:04.633747]
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.129 [2024-07-26 07:43:04.633795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.129 [2024-07-26 07:43:04.633808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.129 [2024-07-26 07:43:04.637997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.129 [2024-07-26 07:43:04.638046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.129 [2024-07-26 07:43:04.638058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.129 [2024-07-26 07:43:04.642337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.129 [2024-07-26 07:43:04.642386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.129 [2024-07-26 07:43:04.642399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.129 [2024-07-26 07:43:04.646677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.129 [2024-07-26 07:43:04.646726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.129 [2024-07-26 07:43:04.646755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.129 [2024-07-26 07:43:04.651088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.129 [2024-07-26 07:43:04.651138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.129 [2024-07-26 07:43:04.651150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.129 [2024-07-26 07:43:04.655350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.129 [2024-07-26 07:43:04.655400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.129 [2024-07-26 07:43:04.655412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.129 [2024-07-26 07:43:04.659575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.129 [2024-07-26 07:43:04.659624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.129 [2024-07-26 07:43:04.659637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:39.129 [2024-07-26 07:43:04.663797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.129 [2024-07-26 07:43:04.663847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.129 [2024-07-26 07:43:04.663859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.129 [2024-07-26 07:43:04.668088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.129 [2024-07-26 07:43:04.668137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.129 [2024-07-26 07:43:04.668149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.129 [2024-07-26 07:43:04.672611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.129 [2024-07-26 07:43:04.672659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.129 [2024-07-26 07:43:04.672672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.129 [2024-07-26 07:43:04.676965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.129 [2024-07-26 07:43:04.677002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.129 [2024-07-26 07:43:04.677015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.129 [2024-07-26 07:43:04.681198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.129 [2024-07-26 07:43:04.681258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.129 [2024-07-26 07:43:04.681271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.129 [2024-07-26 07:43:04.685454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.129 [2024-07-26 07:43:04.685501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.129 [2024-07-26 07:43:04.685515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.129 [2024-07-26 07:43:04.689572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.129 [2024-07-26 07:43:04.689606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.129 [2024-07-26 07:43:04.689618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.129 [2024-07-26 07:43:04.693834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.129 [2024-07-26 07:43:04.693882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.129 [2024-07-26 07:43:04.693894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.129 [2024-07-26 07:43:04.698268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.129 [2024-07-26 07:43:04.698304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.129 [2024-07-26 07:43:04.698317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.129 [2024-07-26 07:43:04.702585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.130 [2024-07-26 07:43:04.702633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.130 [2024-07-26 07:43:04.702645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.130 [2024-07-26 07:43:04.706848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.130 [2024-07-26 07:43:04.706897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.130 [2024-07-26 07:43:04.706909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.130 [2024-07-26 07:43:04.711214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.130 [2024-07-26 07:43:04.711264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.130 [2024-07-26 07:43:04.711276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.130 [2024-07-26 07:43:04.715463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.130 [2024-07-26 07:43:04.715522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.130 [2024-07-26 07:43:04.715535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.130 [2024-07-26 07:43:04.719711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.130 [2024-07-26 07:43:04.719760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.130 [2024-07-26 07:43:04.719772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.130 [2024-07-26 07:43:04.724175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.130 [2024-07-26 07:43:04.724211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.130 [2024-07-26 07:43:04.724223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.389 [2024-07-26 07:43:04.728567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.389 [2024-07-26 07:43:04.728616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.389 [2024-07-26 07:43:04.728628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.389 [2024-07-26 07:43:04.732676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.389 [2024-07-26 07:43:04.732725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.389 [2024-07-26 07:43:04.732737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.389 [2024-07-26 07:43:04.736821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.389 [2024-07-26 07:43:04.736870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.389 [2024-07-26 07:43:04.736882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.389 [2024-07-26 07:43:04.741018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.389 [2024-07-26 07:43:04.741066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.389 [2024-07-26 07:43:04.741078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.389 [2024-07-26 07:43:04.745353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.389 [2024-07-26 07:43:04.745388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.389 [2024-07-26 07:43:04.745401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.389 [2024-07-26 07:43:04.749628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.389 [2024-07-26 07:43:04.749692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:39.389 [2024-07-26 07:43:04.749704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.389 [2024-07-26 07:43:04.753961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.389 [2024-07-26 07:43:04.754010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.389 [2024-07-26 07:43:04.754024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.389 [2024-07-26 07:43:04.758338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.389 [2024-07-26 07:43:04.758387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.389 [2024-07-26 07:43:04.758399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.389 [2024-07-26 07:43:04.762596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.389 [2024-07-26 07:43:04.762644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.389 [2024-07-26 07:43:04.762656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.389 [2024-07-26 07:43:04.766817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.389 [2024-07-26 07:43:04.766865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.389 [2024-07-26 07:43:04.766877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.389 [2024-07-26 07:43:04.771021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.389 [2024-07-26 07:43:04.771069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.389 [2024-07-26 07:43:04.771081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.389 [2024-07-26 07:43:04.775403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.389 [2024-07-26 07:43:04.775452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.389 [2024-07-26 07:43:04.775465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.389 [2024-07-26 07:43:04.779629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.389 [2024-07-26 07:43:04.779677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.389 [2024-07-26 07:43:04.779689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.783841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.783889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.783902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.788042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.788091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.788103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.792219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.792268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.792280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.796461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.796536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.796549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.800666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.800715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.800728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.804777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.804826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.804838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.808965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.809015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.809027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.813187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.813245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.813259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.817526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.817589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.817601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.821712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.821760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.821772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.826002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.826051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.826062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.830270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.830319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.830331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.834581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.834629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.834641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.838791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 
00:17:39.390 [2024-07-26 07:43:04.838839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.838851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.842969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.843017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.843030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.847335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.847384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.847396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.851677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.851725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.851737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.856038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.856088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.856101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.860233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.860282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.860293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.864566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.864614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.864626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.868702] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.868751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.868764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.872917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.872966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.872978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.877276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.877312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.877324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.881553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.881586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.881598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.885812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.885861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.390 [2024-07-26 07:43:04.885873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.390 [2024-07-26 07:43:04.890007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.390 [2024-07-26 07:43:04.890056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.890068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.894257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.894306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.894319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.898557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.898605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.898618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.902734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.902782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.902794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.907077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.907126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.907138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.911383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.911432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.911444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.915698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.915746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.915759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.919891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.919939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.919951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.924220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.924272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.924284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.928462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.928538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.928551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.932778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.932829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.932842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.937105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.937155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.937168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.941380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.941416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.941429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.945756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.945806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.945818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.950132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.950182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.950194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.954610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.954658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.954671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.958912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.958961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.958973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.963197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.963247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.963259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.967502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.967550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.967562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.971799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.971848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.971860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.975977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.976026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.976038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.980089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.980138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.980150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.984235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.984284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:39.391 [2024-07-26 07:43:04.984296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.391 [2024-07-26 07:43:04.988734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.391 [2024-07-26 07:43:04.988785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.391 [2024-07-26 07:43:04.988798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:04.993159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:04.993216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:04.993245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:04.997768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:04.997816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:04.997828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:05.002038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:05.002088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:05.002100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:05.006372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:05.006421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:05.006434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:05.010874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:05.010923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:05.010935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:05.015356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:05.015406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:05.015418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:05.019686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:05.019734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:05.019747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:05.024190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:05.024241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:05.024253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:05.028730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:05.028767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:05.028780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:05.033085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:05.033135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:05.033147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:05.037594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:05.037629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:05.037642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:05.042146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:05.042183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:05.042195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:05.046638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:05.046673] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:05.046686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:05.050999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:05.051046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:05.051059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:05.055445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:05.055525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:05.055538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:05.060030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:05.060079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:05.060091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:05.064380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:05.064428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:05.064440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:05.068775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:05.068811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:05.068824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:05.073083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.651 [2024-07-26 07:43:05.073132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:05.073145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.651 [2024-07-26 07:43:05.077428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 
00:17:39.651 [2024-07-26 07:43:05.077463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.651 [2024-07-26 07:43:05.077491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.081688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.081735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.081747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.085876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.085925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.085937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.090134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.090183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.090195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.094430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.094489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.094502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.098848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.098911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.098924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.103068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.103117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.103129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.107357] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.107406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.107418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.111914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.111964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.111977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.116148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.116197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.116209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.120510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.120559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.120571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.124645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.124692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.124705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.128831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.128868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.128880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.133173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.133246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.133260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.137721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.137771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.137783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.142044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.142093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.142105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.146268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.146318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.146330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.150686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.150734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.150746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.154951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.155000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.155012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.159236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.159285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.159297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.163619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.163669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.163682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.167931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.167968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.167980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.172128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.172177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.172189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.176566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.176615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.176627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.180765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.180813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.180825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.184945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.184994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.185006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.189408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.652 [2024-07-26 07:43:05.189444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.652 [2024-07-26 07:43:05.189456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.652 [2024-07-26 07:43:05.193618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.653 [2024-07-26 07:43:05.193651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.653 [2024-07-26 07:43:05.193663] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.653 [2024-07-26 07:43:05.197809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.653 [2024-07-26 07:43:05.197858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.653 [2024-07-26 07:43:05.197870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.653 [2024-07-26 07:43:05.202066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.653 [2024-07-26 07:43:05.202115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.653 [2024-07-26 07:43:05.202128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.653 [2024-07-26 07:43:05.206421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.653 [2024-07-26 07:43:05.206470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.653 [2024-07-26 07:43:05.206494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.653 [2024-07-26 07:43:05.210643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.653 [2024-07-26 07:43:05.210691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.653 [2024-07-26 07:43:05.210702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.653 [2024-07-26 07:43:05.214848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.653 [2024-07-26 07:43:05.214897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.653 [2024-07-26 07:43:05.214909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.653 [2024-07-26 07:43:05.219084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.653 [2024-07-26 07:43:05.219134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.653 [2024-07-26 07:43:05.219146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.653 [2024-07-26 07:43:05.223426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.653 [2024-07-26 07:43:05.223475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:39.653 [2024-07-26 07:43:05.223514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.653 [2024-07-26 07:43:05.227861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.653 [2024-07-26 07:43:05.227915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.653 [2024-07-26 07:43:05.227928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.653 [2024-07-26 07:43:05.232088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.653 [2024-07-26 07:43:05.232137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.653 [2024-07-26 07:43:05.232149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.653 [2024-07-26 07:43:05.236385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.653 [2024-07-26 07:43:05.236436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.653 [2024-07-26 07:43:05.236448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.653 [2024-07-26 07:43:05.240645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.653 [2024-07-26 07:43:05.240695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.653 [2024-07-26 07:43:05.240707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.653 [2024-07-26 07:43:05.244941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.653 [2024-07-26 07:43:05.244991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.653 [2024-07-26 07:43:05.245005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.653 [2024-07-26 07:43:05.249426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.653 [2024-07-26 07:43:05.249462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.653 [2024-07-26 07:43:05.249491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.912 [2024-07-26 07:43:05.253816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.912 [2024-07-26 07:43:05.253878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.912 [2024-07-26 07:43:05.253890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.912 [2024-07-26 07:43:05.258179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.912 [2024-07-26 07:43:05.258228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.912 [2024-07-26 07:43:05.258240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.912 [2024-07-26 07:43:05.262381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.912 [2024-07-26 07:43:05.262430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.262441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.266817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.266866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.266894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.271239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.271290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.271303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.275856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.275906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.275920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.280449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.280526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.280539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.285019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.285067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.285079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.289486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.289532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.289545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.294115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.294163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.294175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.298620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.298671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.298684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.303116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.303165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.303176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.307521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.307569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.307581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.311647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.311694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.311706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.315791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 
00:17:39.913 [2024-07-26 07:43:05.315838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.315850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.319876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.319925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.319937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.324315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.324362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.324375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.328485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.328531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.328543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.332655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.332704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.332717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.336731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.336778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.336790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.340894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.340944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.340957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.345051] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.345100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.345111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.349274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.349310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.349322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.353490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.353535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.353579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.357706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.357753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.357765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.361871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.361919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.361931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.366106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.366155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.366167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.370310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.370359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.370371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.374693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.913 [2024-07-26 07:43:05.374742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.913 [2024-07-26 07:43:05.374754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.913 [2024-07-26 07:43:05.378962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.379011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.379023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.383166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.383215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.383227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.387370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.387419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.387431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.391612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.391660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.391672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.395742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.395791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.395802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.399892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.399941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.399953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.404124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.404173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.404184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.408356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.408405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.408417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.412604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.412652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.412664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.416754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.416803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.416815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.420826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.420875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.420888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.425061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.425111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.425123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.429243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.429279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.429291] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.433452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.433498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.433512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.437800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.437849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.437860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.441969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.442018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.442030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.446160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.446208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.446219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.450412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.450461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.450472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.454585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.454633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.454645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.458786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.458835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.458862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.463052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.463101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.463113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.467370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.467419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.467431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.471566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.471614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.471625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.475702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.475750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.475762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.914 [2024-07-26 07:43:05.479896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d3f200) 00:17:39.914 [2024-07-26 07:43:05.479945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.914 [2024-07-26 07:43:05.479956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.914 00:17:39.914 Latency(us) 00:17:39.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.914 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:39.914 nvme0n1 : 2.00 7225.20 903.15 0.00 0.00 2211.11 1817.13 7596.22 00:17:39.914 =================================================================================================================== 00:17:39.915 Total : 7225.20 903.15 0.00 0.00 2211.11 1817.13 7596.22 00:17:39.915 0 00:17:39.915 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:39.915 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:39.915 
07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:39.915 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:39.915 | .driver_specific 00:17:39.915 | .nvme_error 00:17:39.915 | .status_code 00:17:39.915 | .command_transient_transport_error' 00:17:40.172 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 466 > 0 )) 00:17:40.172 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79954 00:17:40.172 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79954 ']' 00:17:40.172 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79954 00:17:40.172 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:40.173 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:40.173 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79954 00:17:40.430 killing process with pid 79954 00:17:40.430 Received shutdown signal, test time was about 2.000000 seconds 00:17:40.430 00:17:40.430 Latency(us) 00:17:40.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.430 =================================================================================================================== 00:17:40.430 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:40.430 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:40.430 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:40.430 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79954' 00:17:40.430 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79954 00:17:40.430 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79954 00:17:40.688 07:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:17:40.688 07:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:40.688 07:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:40.688 07:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:40.688 07:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:40.689 07:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80009 00:17:40.689 07:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:17:40.689 07:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80009 /var/tmp/bperf.sock 00:17:40.689 07:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80009 ']' 00:17:40.689 07:43:06 
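For the randread phase above, the trace shows how get_transient_errcount verifies that the injected digest corruption was actually observed: it pulls bdevperf's per-bdev statistics over the /var/tmp/bperf.sock RPC socket and extracts the NVMe transient transport error counter from the JSON with jq, which came back as 466 in this run. A minimal stand-alone sketch of that check, reusing only the rpc.py invocation and jq path visible in the trace (the script path, socket, and bdev name are the ones from this run, not general defaults):

errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# Data digest errors complete as COMMAND TRANSIENT TRANSPORT ERROR (00/22), as seen in the
# notices above, so the phase passes only if at least one such completion was counted.
(( errcount > 0 ))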
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:40.689 07:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:40.689 07:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:40.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:40.689 07:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:40.689 07:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:40.689 [2024-07-26 07:43:06.136683] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:17:40.689 [2024-07-26 07:43:06.136773] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80009 ] 00:17:40.689 [2024-07-26 07:43:06.267185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.947 [2024-07-26 07:43:06.399726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.947 [2024-07-26 07:43:06.472505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:41.512 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:41.512 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:17:41.512 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:41.512 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:41.771 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:41.771 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.771 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:41.771 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.771 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:41.771 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:42.029 nvme0n1 00:17:42.029 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:42.029 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.029 07:43:07 
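The setup traced above prepares the second phase (randwrite, 4096-byte I/O, queue depth 128, 2 seconds): a fresh bdevperf instance is started on /var/tmp/bperf.sock, NVMe error statistics are enabled via --nvme-error-stat together with --bdev-retry-count -1, the controller is attached over TCP with data digest enabled (--ddgst), and accel_error_inject_error is told to corrupt the next 256 crc32c results so that subsequent data digest checks fail. A sketch of the same sequence condensed into the underlying RPCs (the injection goes through rpc_cmd rather than the bperf socket, so it is shown without an explicit -s; the socket of the application it actually targets is not spelled out in this excerpt):

# Host-side bdevperf configuration, as expanded in the trace:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt the next 256 crc32c computations so data digests mismatch
# (issued via rpc_cmd, i.e. not against /var/tmp/bperf.sock):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256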
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:42.287 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.287 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:42.287 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:42.287 Running I/O for 2 seconds... 00:17:42.287 [2024-07-26 07:43:07.740268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fef90 00:17:42.287 [2024-07-26 07:43:07.742771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.287 [2024-07-26 07:43:07.742817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.287 [2024-07-26 07:43:07.755548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190feb58 00:17:42.287 [2024-07-26 07:43:07.758059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.287 [2024-07-26 07:43:07.758107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:42.287 [2024-07-26 07:43:07.770329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fe2e8 00:17:42.287 [2024-07-26 07:43:07.772793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.287 [2024-07-26 07:43:07.772840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:42.287 [2024-07-26 07:43:07.785109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fda78 00:17:42.287 [2024-07-26 07:43:07.787548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.287 [2024-07-26 07:43:07.787592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:42.287 [2024-07-26 07:43:07.800229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fd208 00:17:42.287 [2024-07-26 07:43:07.802663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.287 [2024-07-26 07:43:07.802710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:42.287 [2024-07-26 07:43:07.815229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fc998 00:17:42.287 [2024-07-26 07:43:07.817786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.287 [2024-07-26 07:43:07.817820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:42.287 [2024-07-26 07:43:07.831060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fc128 00:17:42.287 [2024-07-26 07:43:07.833514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.287 [2024-07-26 07:43:07.833551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:42.287 [2024-07-26 07:43:07.846633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fb8b8 00:17:42.287 [2024-07-26 07:43:07.849011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.287 [2024-07-26 07:43:07.849058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:42.287 [2024-07-26 07:43:07.861753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fb048 00:17:42.287 [2024-07-26 07:43:07.864037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.287 [2024-07-26 07:43:07.864084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:42.287 [2024-07-26 07:43:07.876725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fa7d8 00:17:42.287 [2024-07-26 07:43:07.878993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.287 [2024-07-26 07:43:07.879040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:42.546 [2024-07-26 07:43:07.892471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f9f68 00:17:42.546 [2024-07-26 07:43:07.894742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.546 [2024-07-26 07:43:07.894789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:42.546 [2024-07-26 07:43:07.907478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f96f8 00:17:42.546 [2024-07-26 07:43:07.909802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.546 [2024-07-26 07:43:07.909835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:42.546 [2024-07-26 07:43:07.922411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f8e88 00:17:42.546 [2024-07-26 07:43:07.924599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.546 [2024-07-26 07:43:07.924645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:42.546 [2024-07-26 07:43:07.937410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f8618 00:17:42.546 [2024-07-26 07:43:07.939593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.546 [2024-07-26 07:43:07.939640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:42.546 [2024-07-26 07:43:07.952300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f7da8 00:17:42.546 [2024-07-26 07:43:07.954689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.546 [2024-07-26 07:43:07.954735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:42.546 [2024-07-26 07:43:07.967328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f7538 00:17:42.546 [2024-07-26 07:43:07.969594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.546 [2024-07-26 07:43:07.969626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:42.546 [2024-07-26 07:43:07.982332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f6cc8 00:17:42.546 [2024-07-26 07:43:07.984515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.546 [2024-07-26 07:43:07.984559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.546 [2024-07-26 07:43:07.997123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f6458 00:17:42.546 [2024-07-26 07:43:07.999298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.546 [2024-07-26 07:43:07.999344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:42.546 [2024-07-26 07:43:08.012543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f5be8 00:17:42.546 [2024-07-26 07:43:08.014659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.546 [2024-07-26 07:43:08.014706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:42.546 [2024-07-26 07:43:08.027867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f5378 00:17:42.546 [2024-07-26 07:43:08.030043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.546 [2024-07-26 07:43:08.030090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:42.547 [2024-07-26 07:43:08.043082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f4b08 00:17:42.547 [2024-07-26 07:43:08.045169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.547 [2024-07-26 07:43:08.045222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:42.547 [2024-07-26 07:43:08.058104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f4298 00:17:42.547 [2024-07-26 07:43:08.060216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.547 [2024-07-26 07:43:08.060263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:42.547 [2024-07-26 07:43:08.073322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f3a28 00:17:42.547 [2024-07-26 07:43:08.075387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.547 [2024-07-26 07:43:08.075432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:42.547 [2024-07-26 07:43:08.088333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f31b8 00:17:42.547 [2024-07-26 07:43:08.090427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.547 [2024-07-26 07:43:08.090474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:42.547 [2024-07-26 07:43:08.103326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f2948 00:17:42.547 [2024-07-26 07:43:08.105435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.547 [2024-07-26 07:43:08.105477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:42.547 [2024-07-26 07:43:08.118399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f20d8 00:17:42.547 [2024-07-26 07:43:08.120392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.547 [2024-07-26 07:43:08.120438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:42.547 [2024-07-26 07:43:08.133430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f1868 00:17:42.547 [2024-07-26 07:43:08.135368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.547 [2024-07-26 07:43:08.135414] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:42.806 [2024-07-26 07:43:08.149069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f0ff8 00:17:42.806 [2024-07-26 07:43:08.151249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.806 [2024-07-26 07:43:08.151296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:42.806 [2024-07-26 07:43:08.164375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f0788 00:17:42.806 [2024-07-26 07:43:08.166350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.806 [2024-07-26 07:43:08.166397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:42.806 [2024-07-26 07:43:08.179405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190eff18 00:17:42.806 [2024-07-26 07:43:08.181368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.806 [2024-07-26 07:43:08.181401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:42.806 [2024-07-26 07:43:08.194468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190ef6a8 00:17:42.806 [2024-07-26 07:43:08.196378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.806 [2024-07-26 07:43:08.196425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:42.806 [2024-07-26 07:43:08.209664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190eee38 00:17:42.806 [2024-07-26 07:43:08.211536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.806 [2024-07-26 07:43:08.211584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:42.806 [2024-07-26 07:43:08.224707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190ee5c8 00:17:42.806 [2024-07-26 07:43:08.226546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.806 [2024-07-26 07:43:08.226592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.806 [2024-07-26 07:43:08.239766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190edd58 00:17:42.806 [2024-07-26 07:43:08.241690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.806 [2024-07-26 
07:43:08.241736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:42.806 [2024-07-26 07:43:08.254885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190ed4e8 00:17:42.806 [2024-07-26 07:43:08.256683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.806 [2024-07-26 07:43:08.256731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:42.806 [2024-07-26 07:43:08.269863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190ecc78 00:17:42.806 [2024-07-26 07:43:08.271658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.806 [2024-07-26 07:43:08.271706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:42.806 [2024-07-26 07:43:08.284942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190ec408 00:17:42.806 [2024-07-26 07:43:08.286741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.806 [2024-07-26 07:43:08.286787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:42.806 [2024-07-26 07:43:08.300000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190ebb98 00:17:42.806 [2024-07-26 07:43:08.301845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.806 [2024-07-26 07:43:08.301890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:42.806 [2024-07-26 07:43:08.314978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190eb328 00:17:42.806 [2024-07-26 07:43:08.316709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.806 [2024-07-26 07:43:08.316755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:42.806 [2024-07-26 07:43:08.330350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190eaab8 00:17:42.806 [2024-07-26 07:43:08.332145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.806 [2024-07-26 07:43:08.332192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:42.806 [2024-07-26 07:43:08.346572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190ea248 00:17:42.806 [2024-07-26 07:43:08.348281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:42.806 [2024-07-26 07:43:08.348327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:42.806 [2024-07-26 07:43:08.362372] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e99d8 00:17:42.806 [2024-07-26 07:43:08.364179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.806 [2024-07-26 07:43:08.364224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:42.806 [2024-07-26 07:43:08.377589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e9168 00:17:42.806 [2024-07-26 07:43:08.379262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.806 [2024-07-26 07:43:08.379307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:42.806 [2024-07-26 07:43:08.392738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e88f8 00:17:42.806 [2024-07-26 07:43:08.394395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.806 [2024-07-26 07:43:08.394442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:43.065 [2024-07-26 07:43:08.408073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e8088 00:17:43.065 [2024-07-26 07:43:08.409787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.065 [2024-07-26 07:43:08.409818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:43.065 [2024-07-26 07:43:08.423242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e7818 00:17:43.065 [2024-07-26 07:43:08.424907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.065 [2024-07-26 07:43:08.424951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:43.065 [2024-07-26 07:43:08.438275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e6fa8 00:17:43.065 [2024-07-26 07:43:08.439894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.065 [2024-07-26 07:43:08.439939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:43.065 [2024-07-26 07:43:08.453162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e6738 00:17:43.065 [2024-07-26 07:43:08.454820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22520 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:17:43.065 [2024-07-26 07:43:08.454866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:43.065 [2024-07-26 07:43:08.468394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e5ec8 00:17:43.066 [2024-07-26 07:43:08.470000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.066 [2024-07-26 07:43:08.470048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.066 [2024-07-26 07:43:08.483760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e5658 00:17:43.066 [2024-07-26 07:43:08.485348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.066 [2024-07-26 07:43:08.485382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:43.066 [2024-07-26 07:43:08.498762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e4de8 00:17:43.066 [2024-07-26 07:43:08.500275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.066 [2024-07-26 07:43:08.500323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:43.066 [2024-07-26 07:43:08.513772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e4578 00:17:43.066 [2024-07-26 07:43:08.515269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.066 [2024-07-26 07:43:08.515315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:43.066 [2024-07-26 07:43:08.528789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e3d08 00:17:43.066 [2024-07-26 07:43:08.530294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.066 [2024-07-26 07:43:08.530341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:43.066 [2024-07-26 07:43:08.543881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e3498 00:17:43.066 [2024-07-26 07:43:08.545394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.066 [2024-07-26 07:43:08.545426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:43.066 [2024-07-26 07:43:08.559133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e2c28 00:17:43.066 [2024-07-26 07:43:08.560615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23239 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.066 [2024-07-26 07:43:08.560659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:43.066 [2024-07-26 07:43:08.573891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e23b8 00:17:43.066 [2024-07-26 07:43:08.575314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.066 [2024-07-26 07:43:08.575360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:43.066 [2024-07-26 07:43:08.588659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e1b48 00:17:43.066 [2024-07-26 07:43:08.590109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.066 [2024-07-26 07:43:08.590154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:43.066 [2024-07-26 07:43:08.603304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e12d8 00:17:43.066 [2024-07-26 07:43:08.604729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.066 [2024-07-26 07:43:08.604775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:43.066 [2024-07-26 07:43:08.618005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e0a68 00:17:43.066 [2024-07-26 07:43:08.619378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.066 [2024-07-26 07:43:08.619423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:43.066 [2024-07-26 07:43:08.633816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e01f8 00:17:43.066 [2024-07-26 07:43:08.635166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.066 [2024-07-26 07:43:08.635213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:43.066 [2024-07-26 07:43:08.649216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190df988 00:17:43.066 [2024-07-26 07:43:08.650626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.066 [2024-07-26 07:43:08.650670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:43.066 [2024-07-26 07:43:08.664103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190df118 00:17:43.066 [2024-07-26 07:43:08.665501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:123 nsid:1 lba:18631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.066 [2024-07-26 07:43:08.665532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:43.325 [2024-07-26 07:43:08.679374] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190de8a8 00:17:43.325 [2024-07-26 07:43:08.680746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.325 [2024-07-26 07:43:08.680794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:43.325 [2024-07-26 07:43:08.695027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190de038 00:17:43.325 [2024-07-26 07:43:08.696339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.325 [2024-07-26 07:43:08.696385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:43.325 [2024-07-26 07:43:08.717595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190de038 00:17:43.325 [2024-07-26 07:43:08.720086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.325 [2024-07-26 07:43:08.720134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.325 [2024-07-26 07:43:08.732994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190de8a8 00:17:43.325 [2024-07-26 07:43:08.735434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.325 [2024-07-26 07:43:08.735488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:43.325 [2024-07-26 07:43:08.748195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190df118 00:17:43.325 [2024-07-26 07:43:08.750672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.325 [2024-07-26 07:43:08.750704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:43.325 [2024-07-26 07:43:08.763537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190df988 00:17:43.325 [2024-07-26 07:43:08.765974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.325 [2024-07-26 07:43:08.766020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:43.325 [2024-07-26 07:43:08.778716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e01f8 00:17:43.325 [2024-07-26 07:43:08.781072] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.325 [2024-07-26 07:43:08.781119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:43.325 [2024-07-26 07:43:08.793670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e0a68 00:17:43.325 [2024-07-26 07:43:08.796052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.325 [2024-07-26 07:43:08.796081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:43.325 [2024-07-26 07:43:08.808586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e12d8 00:17:43.325 [2024-07-26 07:43:08.810892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.325 [2024-07-26 07:43:08.810938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:43.325 [2024-07-26 07:43:08.823817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e1b48 00:17:43.325 [2024-07-26 07:43:08.826196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.325 [2024-07-26 07:43:08.826240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:43.325 [2024-07-26 07:43:08.839032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e23b8 00:17:43.325 [2024-07-26 07:43:08.841322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.325 [2024-07-26 07:43:08.841354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:43.325 [2024-07-26 07:43:08.854039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e2c28 00:17:43.325 [2024-07-26 07:43:08.856267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.325 [2024-07-26 07:43:08.856312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:43.325 [2024-07-26 07:43:08.869035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e3498 00:17:43.325 [2024-07-26 07:43:08.871301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.325 [2024-07-26 07:43:08.871347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:43.325 [2024-07-26 07:43:08.884154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e3d08 00:17:43.325 [2024-07-26 07:43:08.886478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.325 [2024-07-26 07:43:08.886533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:43.325 [2024-07-26 07:43:08.899339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e4578 00:17:43.325 [2024-07-26 07:43:08.901689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.325 [2024-07-26 07:43:08.901733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:43.325 [2024-07-26 07:43:08.914212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e4de8 00:17:43.325 [2024-07-26 07:43:08.916444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.325 [2024-07-26 07:43:08.916498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:43.584 [2024-07-26 07:43:08.929709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e5658 00:17:43.584 [2024-07-26 07:43:08.931941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.584 [2024-07-26 07:43:08.931987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:43.584 [2024-07-26 07:43:08.944491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e5ec8 00:17:43.584 [2024-07-26 07:43:08.946617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.584 [2024-07-26 07:43:08.946662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.585 [2024-07-26 07:43:08.959195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e6738 00:17:43.585 [2024-07-26 07:43:08.961385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.585 [2024-07-26 07:43:08.961417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:43.585 [2024-07-26 07:43:08.973910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e6fa8 00:17:43.585 [2024-07-26 07:43:08.975978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.585 [2024-07-26 07:43:08.976023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:43.585 [2024-07-26 07:43:08.988584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e7818 00:17:43.585 [2024-07-26 07:43:08.990667] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.585 [2024-07-26 07:43:08.990712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:43.585 [2024-07-26 07:43:09.003305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e8088 00:17:43.585 [2024-07-26 07:43:09.005461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.585 [2024-07-26 07:43:09.005501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:43.585 [2024-07-26 07:43:09.018195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e88f8 00:17:43.585 [2024-07-26 07:43:09.020295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.585 [2024-07-26 07:43:09.020339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:43.585 [2024-07-26 07:43:09.033049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e9168 00:17:43.585 [2024-07-26 07:43:09.035121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.585 [2024-07-26 07:43:09.035166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:43.585 [2024-07-26 07:43:09.047781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190e99d8 00:17:43.585 [2024-07-26 07:43:09.049873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.585 [2024-07-26 07:43:09.049918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:43.585 [2024-07-26 07:43:09.062516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190ea248 00:17:43.585 [2024-07-26 07:43:09.064536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.585 [2024-07-26 07:43:09.064582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:43.585 [2024-07-26 07:43:09.077110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190eaab8 00:17:43.585 [2024-07-26 07:43:09.079152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.585 [2024-07-26 07:43:09.079195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:43.585 [2024-07-26 07:43:09.091959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190eb328 00:17:43.585 [2024-07-26 
07:43:09.094027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.585 [2024-07-26 07:43:09.094073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:43.585 [2024-07-26 07:43:09.106753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190ebb98 00:17:43.585 [2024-07-26 07:43:09.108674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.585 [2024-07-26 07:43:09.108720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:43.585 [2024-07-26 07:43:09.121352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190ec408 00:17:43.585 [2024-07-26 07:43:09.123337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.585 [2024-07-26 07:43:09.123380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:43.585 [2024-07-26 07:43:09.136317] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190ecc78 00:17:43.585 [2024-07-26 07:43:09.138332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.585 [2024-07-26 07:43:09.138377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:43.585 [2024-07-26 07:43:09.151132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190ed4e8 00:17:43.585 [2024-07-26 07:43:09.153052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.585 [2024-07-26 07:43:09.153096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:43.585 [2024-07-26 07:43:09.165739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190edd58 00:17:43.585 [2024-07-26 07:43:09.167602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.585 [2024-07-26 07:43:09.167647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:43.585 [2024-07-26 07:43:09.180241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190ee5c8 00:17:43.585 [2024-07-26 07:43:09.182279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.585 [2024-07-26 07:43:09.182325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.844 [2024-07-26 07:43:09.195675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190eee38 00:17:43.844 
[2024-07-26 07:43:09.197577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.844 [2024-07-26 07:43:09.197610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:43.844 [2024-07-26 07:43:09.210347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190ef6a8 00:17:43.844 [2024-07-26 07:43:09.212204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.844 [2024-07-26 07:43:09.212250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:43.844 [2024-07-26 07:43:09.225035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190eff18 00:17:43.844 [2024-07-26 07:43:09.226854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.844 [2024-07-26 07:43:09.226899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:43.844 [2024-07-26 07:43:09.239690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f0788 00:17:43.844 [2024-07-26 07:43:09.241495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.844 [2024-07-26 07:43:09.241526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:43.844 [2024-07-26 07:43:09.254268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f0ff8 00:17:43.844 [2024-07-26 07:43:09.256077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.844 [2024-07-26 07:43:09.256120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:43.844 [2024-07-26 07:43:09.268856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f1868 00:17:43.844 [2024-07-26 07:43:09.270600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.844 [2024-07-26 07:43:09.270645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:43.844 [2024-07-26 07:43:09.283348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f20d8 00:17:43.844 [2024-07-26 07:43:09.285093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.844 [2024-07-26 07:43:09.285136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:43.844 [2024-07-26 07:43:09.298104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f2948 
00:17:43.844 [2024-07-26 07:43:09.299826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.844 [2024-07-26 07:43:09.299871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:43.844 [2024-07-26 07:43:09.312687] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f31b8 00:17:43.844 [2024-07-26 07:43:09.314382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.844 [2024-07-26 07:43:09.314427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:43.844 [2024-07-26 07:43:09.327206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f3a28 00:17:43.844 [2024-07-26 07:43:09.328889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.844 [2024-07-26 07:43:09.328948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:43.844 [2024-07-26 07:43:09.341959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f4298 00:17:43.844 [2024-07-26 07:43:09.343600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.844 [2024-07-26 07:43:09.343645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:43.844 [2024-07-26 07:43:09.357623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f4b08 00:17:43.844 [2024-07-26 07:43:09.359311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.844 [2024-07-26 07:43:09.359358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:43.844 [2024-07-26 07:43:09.373922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f5378 00:17:43.844 [2024-07-26 07:43:09.375524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.844 [2024-07-26 07:43:09.375556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:43.844 [2024-07-26 07:43:09.389911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f5be8 00:17:43.844 [2024-07-26 07:43:09.391519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.844 [2024-07-26 07:43:09.391572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:43.844 [2024-07-26 07:43:09.404922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with 
pdu=0x2000190f6458 00:17:43.844 [2024-07-26 07:43:09.406505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.844 [2024-07-26 07:43:09.406560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:43.844 [2024-07-26 07:43:09.419740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f6cc8 00:17:43.844 [2024-07-26 07:43:09.421323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.844 [2024-07-26 07:43:09.421356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.844 [2024-07-26 07:43:09.434496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f7538 00:17:43.844 [2024-07-26 07:43:09.436081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.844 [2024-07-26 07:43:09.436126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:44.103 [2024-07-26 07:43:09.450077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f7da8 00:17:44.103 [2024-07-26 07:43:09.451593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.103 [2024-07-26 07:43:09.451638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:44.103 [2024-07-26 07:43:09.464716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f8618 00:17:44.103 [2024-07-26 07:43:09.466238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.103 [2024-07-26 07:43:09.466283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:44.103 [2024-07-26 07:43:09.479389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f8e88 00:17:44.103 [2024-07-26 07:43:09.480882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.103 [2024-07-26 07:43:09.480927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:44.103 [2024-07-26 07:43:09.494011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190f96f8 00:17:44.103 [2024-07-26 07:43:09.495473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.103 [2024-07-26 07:43:09.495524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:44.103 [2024-07-26 07:43:09.508615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xbdd650) with pdu=0x2000190f9f68 00:17:44.103 [2024-07-26 07:43:09.510091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.103 [2024-07-26 07:43:09.510138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:44.103 [2024-07-26 07:43:09.523336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fa7d8 00:17:44.103 [2024-07-26 07:43:09.524814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.103 [2024-07-26 07:43:09.524859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:44.103 [2024-07-26 07:43:09.538146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fb048 00:17:44.103 [2024-07-26 07:43:09.539574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.103 [2024-07-26 07:43:09.539618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:44.103 [2024-07-26 07:43:09.552845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fb8b8 00:17:44.103 [2024-07-26 07:43:09.554252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.103 [2024-07-26 07:43:09.554297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:44.103 [2024-07-26 07:43:09.568115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fc128 00:17:44.103 [2024-07-26 07:43:09.569579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.103 [2024-07-26 07:43:09.569626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:44.103 [2024-07-26 07:43:09.583366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fc998 00:17:44.104 [2024-07-26 07:43:09.584799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.104 [2024-07-26 07:43:09.584831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:44.104 [2024-07-26 07:43:09.599206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fd208 00:17:44.104 [2024-07-26 07:43:09.600616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.104 [2024-07-26 07:43:09.600652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:44.104 [2024-07-26 07:43:09.614736] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fda78 00:17:44.104 [2024-07-26 07:43:09.616099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.104 [2024-07-26 07:43:09.616147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:44.104 [2024-07-26 07:43:09.629997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fe2e8 00:17:44.104 [2024-07-26 07:43:09.631312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.104 [2024-07-26 07:43:09.631358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:44.104 [2024-07-26 07:43:09.645499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190feb58 00:17:44.104 [2024-07-26 07:43:09.646792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.104 [2024-07-26 07:43:09.646826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:44.104 [2024-07-26 07:43:09.667801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fef90 00:17:44.104 [2024-07-26 07:43:09.670312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.104 [2024-07-26 07:43:09.670360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.104 [2024-07-26 07:43:09.682981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190feb58 00:17:44.104 [2024-07-26 07:43:09.685442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.104 [2024-07-26 07:43:09.685485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:44.104 [2024-07-26 07:43:09.698087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fe2e8 00:17:44.104 [2024-07-26 07:43:09.700606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.104 [2024-07-26 07:43:09.700649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:44.362 [2024-07-26 07:43:09.713841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbdd650) with pdu=0x2000190fda78 00:17:44.362 [2024-07-26 07:43:09.716197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.362 [2024-07-26 07:43:09.716244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:44.362 00:17:44.362 Latency(us) 00:17:44.362 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:44.362 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:44.362 nvme0n1 : 2.01 16778.24 65.54 0.00 0.00 7622.85 3961.95 29312.47
00:17:44.362 ===================================================================================================================
00:17:44.362 Total : 16778.24 65.54 0.00 0.00 7622.85 3961.95 29312.47
00:17:44.362 0
00:17:44.362 07:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:44.362 07:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:44.362 07:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:44.362 | .driver_specific
00:17:44.362 | .nvme_error
00:17:44.362 | .status_code
00:17:44.362 | .command_transient_transport_error'
00:17:44.362 07:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:44.621 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 131 > 0 ))
00:17:44.621 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80009
00:17:44.621 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80009 ']'
00:17:44.621 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80009
00:17:44.621 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:17:44.621 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:44.621 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80009
00:17:44.621 killing process with pid 80009
Received shutdown signal, test time was about 2.000000 seconds
00:17:44.621
00:17:44.621 Latency(us)
00:17:44.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:44.621 ===================================================================================================================
00:17:44.621 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:44.621 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:17:44.621 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:17:44.621 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80009'
00:17:44.621 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80009
00:17:44.621 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80009
00:17:44.879 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:17:44.879 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:17:44.879 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:17:44.879 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
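The (( 131 > 0 )) check above is where this error pass is judged: with --nvme-error-stat enabled through bdev_nvme_set_options (the same option appears in the next job's setup below), the bdev_nvme driver keeps per-status-code NVMe error counters, bdev_get_iostat exposes them under driver_specific.nvme_error, and the script only requires the command_transient_transport_error count (131 in this run) to be non-zero. A minimal by-hand version of the same query, assuming the SPDK checkout path shown in the trace and a bdevperf instance still answering RPCs on /var/tmp/bperf.sock; the count variable and the final echo are illustrative, not part of digest.sh:

  # Read the transient transport error counter the same way the jq filter above does.
  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The harness only asserts that the counter is non-zero.
  (( count > 0 )) && echo "observed $count transient transport errors"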
00:17:44.879 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:44.879 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80069 00:17:44.880 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:44.880 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80069 /var/tmp/bperf.sock 00:17:44.880 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80069 ']' 00:17:44.880 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:44.880 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.880 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:44.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:44.880 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.880 07:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:44.880 [2024-07-26 07:43:10.381304] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:17:44.880 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:44.880 Zero copy mechanism will not be used. 00:17:44.880 [2024-07-26 07:43:10.381382] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80069 ] 00:17:45.139 [2024-07-26 07:43:10.512454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.139 [2024-07-26 07:43:10.622310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.139 [2024-07-26 07:43:10.696197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:45.706 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:45.706 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:17:45.706 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:45.706 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:45.964 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:45.964 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.964 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:45.964 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
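The entries above prepare the second error-injection pass: bdevperf is relaunched with a 131072-byte randwrite workload at queue depth 16 and -z, so it idles on /var/tmp/bperf.sock until perform_tests is sent, bdev_nvme_set_options enables --nvme-error-stat and sets --bdev-retry-count -1, and CRC32C error injection is switched off for the moment (the accel_error_inject_error calls go through the harness's rpc_cmd wrapper, whose socket argument is hidden behind xtrace_disable here). The next entries attach the controller with data digest enabled (--ddgst), re-arm the corruption with -t corrupt -i 32, and start the job. A minimal sketch of the setup half of that sequence, using only commands visible in the trace and assuming the nvmf target from earlier in the run is still listening; the polling loop merely stands in for the harness's waitforlisten helper:

  # Relaunch bdevperf as an RPC-driven job for the 128 KiB randwrite error pass.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Wait until the bdevperf RPC socket answers (stand-in for waitforlisten).
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  # Keep per-status-code NVMe error counters and leave retries to the bdev layer.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1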
00:17:45.964 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:45.964 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:46.530 nvme0n1 00:17:46.530 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:46.530 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.530 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:46.530 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.530 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:46.530 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:46.530 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:46.530 Zero copy mechanism will not be used. 00:17:46.530 Running I/O for 2 seconds... 00:17:46.530 [2024-07-26 07:43:12.000699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.530 [2024-07-26 07:43:12.001013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.530 [2024-07-26 07:43:12.001044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.530 [2024-07-26 07:43:12.006023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.530 [2024-07-26 07:43:12.006348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.530 [2024-07-26 07:43:12.006381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.530 [2024-07-26 07:43:12.011207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.530 [2024-07-26 07:43:12.011542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.530 [2024-07-26 07:43:12.011567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.530 [2024-07-26 07:43:12.016311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.530 [2024-07-26 07:43:12.016635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.530 [2024-07-26 07:43:12.016665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:46.530 [2024-07-26 07:43:12.021430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.530 [2024-07-26 07:43:12.021746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.530 [2024-07-26 07:43:12.021776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.530 [2024-07-26 07:43:12.026441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.530 [2024-07-26 07:43:12.026754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.530 [2024-07-26 07:43:12.026783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.530 [2024-07-26 07:43:12.031407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.530 [2024-07-26 07:43:12.031734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.530 [2024-07-26 07:43:12.031763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.530 [2024-07-26 07:43:12.036471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.530 [2024-07-26 07:43:12.036779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.530 [2024-07-26 07:43:12.036806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.530 [2024-07-26 07:43:12.041655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.530 [2024-07-26 07:43:12.041948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.530 [2024-07-26 07:43:12.041976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.530 [2024-07-26 07:43:12.046748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.530 [2024-07-26 07:43:12.047037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.530 [2024-07-26 07:43:12.047065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.530 [2024-07-26 07:43:12.051895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.530 [2024-07-26 07:43:12.052189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.530 [2024-07-26 07:43:12.052217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.530 [2024-07-26 07:43:12.057014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.530 [2024-07-26 07:43:12.057321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.531 [2024-07-26 07:43:12.057350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.531 [2024-07-26 07:43:12.062146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.531 [2024-07-26 07:43:12.062435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.531 [2024-07-26 07:43:12.062463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.531 [2024-07-26 07:43:12.067308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.531 [2024-07-26 07:43:12.067648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.531 [2024-07-26 07:43:12.067677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.531 [2024-07-26 07:43:12.072390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.531 [2024-07-26 07:43:12.072698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.531 [2024-07-26 07:43:12.072725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.531 [2024-07-26 07:43:12.077464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.531 [2024-07-26 07:43:12.077780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.531 [2024-07-26 07:43:12.077808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.531 [2024-07-26 07:43:12.082498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.531 [2024-07-26 07:43:12.082831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.531 [2024-07-26 07:43:12.082860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.531 [2024-07-26 07:43:12.087588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.531 [2024-07-26 07:43:12.087905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.531 [2024-07-26 07:43:12.087932] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.531 [2024-07-26 07:43:12.092768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.531 [2024-07-26 07:43:12.093057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.531 [2024-07-26 07:43:12.093085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.531 [2024-07-26 07:43:12.097948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.531 [2024-07-26 07:43:12.098274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.531 [2024-07-26 07:43:12.098305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.531 [2024-07-26 07:43:12.103004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.531 [2024-07-26 07:43:12.103297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.531 [2024-07-26 07:43:12.103326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.531 [2024-07-26 07:43:12.108152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.531 [2024-07-26 07:43:12.108454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.531 [2024-07-26 07:43:12.108492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.531 [2024-07-26 07:43:12.113261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.531 [2024-07-26 07:43:12.113565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.531 [2024-07-26 07:43:12.113588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.531 [2024-07-26 07:43:12.118425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.531 [2024-07-26 07:43:12.118774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.531 [2024-07-26 07:43:12.118803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.531 [2024-07-26 07:43:12.123559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.531 [2024-07-26 07:43:12.123857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.531 [2024-07-26 07:43:12.123885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.531 [2024-07-26 07:43:12.128749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.531 [2024-07-26 07:43:12.129060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.531 [2024-07-26 07:43:12.129090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.790 [2024-07-26 07:43:12.134092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.790 [2024-07-26 07:43:12.134411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.790 [2024-07-26 07:43:12.134440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.790 [2024-07-26 07:43:12.139357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.790 [2024-07-26 07:43:12.139694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.790 [2024-07-26 07:43:12.139718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.790 [2024-07-26 07:43:12.144461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.790 [2024-07-26 07:43:12.144762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.790 [2024-07-26 07:43:12.144790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.790 [2024-07-26 07:43:12.149675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.790 [2024-07-26 07:43:12.149964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.790 [2024-07-26 07:43:12.149992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.790 [2024-07-26 07:43:12.154741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.790 [2024-07-26 07:43:12.155051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.790 [2024-07-26 07:43:12.155078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.790 [2024-07-26 07:43:12.159870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.790 [2024-07-26 07:43:12.160159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.790 
[2024-07-26 07:43:12.160187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.790 [2024-07-26 07:43:12.164955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.790 [2024-07-26 07:43:12.165269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.790 [2024-07-26 07:43:12.165297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.790 [2024-07-26 07:43:12.170032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.790 [2024-07-26 07:43:12.170324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.790 [2024-07-26 07:43:12.170352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.790 [2024-07-26 07:43:12.175081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.790 [2024-07-26 07:43:12.175372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.790 [2024-07-26 07:43:12.175399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.790 [2024-07-26 07:43:12.180127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.790 [2024-07-26 07:43:12.180416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.790 [2024-07-26 07:43:12.180445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.790 [2024-07-26 07:43:12.185060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.790 [2024-07-26 07:43:12.185382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.790 [2024-07-26 07:43:12.185411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.790 [2024-07-26 07:43:12.190072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.790 [2024-07-26 07:43:12.190361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.790 [2024-07-26 07:43:12.190390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.790 [2024-07-26 07:43:12.195074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.790 [2024-07-26 07:43:12.195363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:46.790 [2024-07-26 07:43:12.195390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.790 [2024-07-26 07:43:12.200106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.790 [2024-07-26 07:43:12.200398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.200427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.205122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.205477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.205502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.210144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.210430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.210458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.215197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.215479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.215519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.220189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.220471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.220510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.225127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.225475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.225514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.230229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.230529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.230567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.235276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.235590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.235618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.240275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.240587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.240615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.245196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.245534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.245577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.250193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.250473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.250526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.255172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.255454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.255491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.260123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.260404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.260432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.265045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.265356] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.265378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.270048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.270377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.270406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.275076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.275360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.275388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.280039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.280326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.280353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.284958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.285269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.285296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.289950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.290235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.290262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.294923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.295204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.295230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.299999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.300283] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.300310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.305070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.305413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.305442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.310297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.310625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.310654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.315632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.315941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.315969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.320761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.321067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.321096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.326000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.326322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.326350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.331230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.791 [2024-07-26 07:43:12.331539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.791 [2024-07-26 07:43:12.331568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.791 [2024-07-26 07:43:12.336628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 
00:17:46.792 [2024-07-26 07:43:12.336946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.792 [2024-07-26 07:43:12.336974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.792 [2024-07-26 07:43:12.341979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.792 [2024-07-26 07:43:12.342268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.792 [2024-07-26 07:43:12.342296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.792 [2024-07-26 07:43:12.347276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.792 [2024-07-26 07:43:12.347599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.792 [2024-07-26 07:43:12.347628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.792 [2024-07-26 07:43:12.352428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.792 [2024-07-26 07:43:12.352762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.792 [2024-07-26 07:43:12.352790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.792 [2024-07-26 07:43:12.357659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.792 [2024-07-26 07:43:12.357960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.792 [2024-07-26 07:43:12.357988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.792 [2024-07-26 07:43:12.362966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.792 [2024-07-26 07:43:12.363264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.792 [2024-07-26 07:43:12.363292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.792 [2024-07-26 07:43:12.368100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.792 [2024-07-26 07:43:12.368398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.792 [2024-07-26 07:43:12.368426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.792 [2024-07-26 07:43:12.373178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.792 [2024-07-26 07:43:12.373508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.792 [2024-07-26 07:43:12.373537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.792 [2024-07-26 07:43:12.378263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.792 [2024-07-26 07:43:12.378589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.792 [2024-07-26 07:43:12.378612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.792 [2024-07-26 07:43:12.383252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.792 [2024-07-26 07:43:12.383571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.792 [2024-07-26 07:43:12.383600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.792 [2024-07-26 07:43:12.388438] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:46.792 [2024-07-26 07:43:12.388769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.792 [2024-07-26 07:43:12.388812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.050 [2024-07-26 07:43:12.393742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.050 [2024-07-26 07:43:12.394035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.050 [2024-07-26 07:43:12.394063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.050 [2024-07-26 07:43:12.399124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.050 [2024-07-26 07:43:12.399420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.050 [2024-07-26 07:43:12.399449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.050 [2024-07-26 07:43:12.404257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.050 [2024-07-26 07:43:12.404589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.050 [2024-07-26 07:43:12.404618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.050 [2024-07-26 07:43:12.409393] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.050 [2024-07-26 07:43:12.409709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.050 [2024-07-26 07:43:12.409738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.050 [2024-07-26 07:43:12.414516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.050 [2024-07-26 07:43:12.414835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.050 [2024-07-26 07:43:12.414863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.050 [2024-07-26 07:43:12.419729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.050 [2024-07-26 07:43:12.420020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.050 [2024-07-26 07:43:12.420049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.050 [2024-07-26 07:43:12.424968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.050 [2024-07-26 07:43:12.425271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.050 [2024-07-26 07:43:12.425294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.050 [2024-07-26 07:43:12.430252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.050 [2024-07-26 07:43:12.430592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.050 [2024-07-26 07:43:12.430621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.435471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.435792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.435820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.440847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.441178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.441215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
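Every iteration in this stretch repeats the same pattern: the TCP transport flags a data digest mismatch on the corrupted PDU, nvme_qpair.c prints the WRITE that carried it (len:32 blocks, consistent with the 131072-byte I/O size over 4096-byte blocks), and the completion is returned as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what feeds the command_transient_transport_error counter read after the run, as it was for the previous job. Purely as an offline cross-check, and assuming this console window had been captured to a hypothetical file named bperf_console.log (not something the harness writes), the count could be approximated with:

  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf_console.log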
00:17:47.051 [2024-07-26 07:43:12.446181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.446470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.446542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.451523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.451829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.451886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.456800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.457098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.457127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.461853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.462169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.462197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.466952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.467262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.467290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.472074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.472362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.472390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.477312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.477607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.477635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.482384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.482695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.482724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.487335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.487660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.487689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.492409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.492720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.492747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.497530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.497824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.497866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.502555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.502851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.502879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.507580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.507876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.507903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.512616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.512914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.512941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.517667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.517978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.518004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.522694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.522983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.523010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.527757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.528031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.528059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.532794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.533088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.533117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.537966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.538257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.538285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.543016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.543298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.543325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.548187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.548477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.548514] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.553101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.553460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.553499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.558158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.558442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.558480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.563123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.563406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.051 [2024-07-26 07:43:12.563433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.051 [2024-07-26 07:43:12.568088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.051 [2024-07-26 07:43:12.568370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.052 [2024-07-26 07:43:12.568397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.052 [2024-07-26 07:43:12.573004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.052 [2024-07-26 07:43:12.573319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.052 [2024-07-26 07:43:12.573348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.052 [2024-07-26 07:43:12.577952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.052 [2024-07-26 07:43:12.578235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.052 [2024-07-26 07:43:12.578262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.052 [2024-07-26 07:43:12.582887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.052 [2024-07-26 07:43:12.583169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.052 
[2024-07-26 07:43:12.583196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.052 [2024-07-26 07:43:12.587892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.052 [2024-07-26 07:43:12.588174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.052 [2024-07-26 07:43:12.588202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.052 [2024-07-26 07:43:12.592952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.052 [2024-07-26 07:43:12.593271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.052 [2024-07-26 07:43:12.593299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.052 [2024-07-26 07:43:12.598003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.052 [2024-07-26 07:43:12.598297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.052 [2024-07-26 07:43:12.598324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.052 [2024-07-26 07:43:12.602963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.052 [2024-07-26 07:43:12.603253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.052 [2024-07-26 07:43:12.603280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.052 [2024-07-26 07:43:12.607991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.052 [2024-07-26 07:43:12.608275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.052 [2024-07-26 07:43:12.608303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.052 [2024-07-26 07:43:12.612917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.052 [2024-07-26 07:43:12.613199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.052 [2024-07-26 07:43:12.613252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.052 [2024-07-26 07:43:12.617937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.052 [2024-07-26 07:43:12.618221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:47.052 [2024-07-26 07:43:12.618250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.052 [2024-07-26 07:43:12.622917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.052 [2024-07-26 07:43:12.623200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.052 [2024-07-26 07:43:12.623227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.052 [2024-07-26 07:43:12.627906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.052 [2024-07-26 07:43:12.628190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.052 [2024-07-26 07:43:12.628218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.052 [2024-07-26 07:43:12.632816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.052 [2024-07-26 07:43:12.633100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.052 [2024-07-26 07:43:12.633127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.052 [2024-07-26 07:43:12.637784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.052 [2024-07-26 07:43:12.638081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.052 [2024-07-26 07:43:12.638108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.052 [2024-07-26 07:43:12.642781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.052 [2024-07-26 07:43:12.643063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.052 [2024-07-26 07:43:12.643090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.052 [2024-07-26 07:43:12.647804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.052 [2024-07-26 07:43:12.648101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.052 [2024-07-26 07:43:12.648129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.311 [2024-07-26 07:43:12.653037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.311 [2024-07-26 07:43:12.653351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.311 [2024-07-26 07:43:12.653379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.311 [2024-07-26 07:43:12.658226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.311 [2024-07-26 07:43:12.658511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.311 [2024-07-26 07:43:12.658549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.663170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.663453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.663488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.668130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.668417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.668444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.673015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.673337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.673365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.678105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.678388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.678416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.683134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.683420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.683447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.688052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.688333] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.688360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.693001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.693311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.693339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.697946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.698232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.698259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.703035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.703347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.703376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.708101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.708412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.708440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.713337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.713643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.713672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.718586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.718889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.718917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.723690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.723972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.724001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.728649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.728930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.728957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.733633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.733949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.733975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.738680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.738962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.738989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.743710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.743993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.744020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.748665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.748949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.748976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.753669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.753968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.753995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.758658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 
[2024-07-26 07:43:12.758941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.758968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.763633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.763915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.763942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.768530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.768815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.768841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.773542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.773856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.773883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.778561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.778845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.778871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.783482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.783764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.783790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.788462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.788763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.312 [2024-07-26 07:43:12.788789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.312 [2024-07-26 07:43:12.793568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) 
with pdu=0x2000190fef90 00:17:47.312 [2024-07-26 07:43:12.793858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.793884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.798493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.798787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.798813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.803420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.803713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.803740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.808364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.808680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.808708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.813334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.813642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.813671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.818310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.818633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.818662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.823322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.823643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.823671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.828306] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.828605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.828632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.833200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.833547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.833575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.838145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.838426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.838453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.843097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.843382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.843408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.848044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.848330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.848357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.853036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.853364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.853393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.857957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.858238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.858265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.862926] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.863207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.863234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.867870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.868151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.868177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.872847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.873129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.873157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.877783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.878079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.878106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.882816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.883106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.883133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.887757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.888037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.888065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.892753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.893035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.893063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:17:47.313 [2024-07-26 07:43:12.897601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.897890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.897916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.902537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.902818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.902845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.313 [2024-07-26 07:43:12.907418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.313 [2024-07-26 07:43:12.907713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.313 [2024-07-26 07:43:12.907740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:12.912641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:12.912930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:12.912957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:12.917635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:12.917998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:12.918025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:12.922655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:12.922939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:12.922965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:12.927623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:12.927904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:12.927931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:12.932698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:12.932991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:12.933019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:12.937721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:12.938027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:12.938054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:12.942818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:12.943123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:12.943151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:12.947860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:12.948153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:12.948180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:12.952824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:12.953116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:12.953159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:12.957915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:12.958204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:12.958232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:12.962963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:12.963252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:12.963280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:12.967881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:12.968170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:12.968198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:12.972887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:12.973180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:12.973232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:12.977861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:12.978151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:12.978178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:12.982888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:12.983177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:12.983205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:12.987971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:12.988261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:12.988288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:12.993009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:12.993335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:12.993363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:12.998033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:12.998322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:12.998349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:13.003164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:13.003447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:13.003468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:13.008196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:13.008528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:13.008568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:13.013172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:13.013534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:13.013563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:13.018159] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:13.018439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:13.018478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:13.023225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:13.023518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:13.023545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.573 [2024-07-26 07:43:13.028198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.573 [2024-07-26 07:43:13.028479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.573 [2024-07-26 07:43:13.028517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.033235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.033525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 
[2024-07-26 07:43:13.033554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.038198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.038480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.038518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.043218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.043502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.043539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.048164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.048445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.048480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.053096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.053417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.053445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.058113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.058396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.058423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.063055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.063338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.063364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.068083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.068371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.068398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.073074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.073405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.073433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.078102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.078391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.078418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.082982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.083264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.083292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.087960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.088250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.088277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.092942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.093298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.093326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.098074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.098363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.098391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.103099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.103424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.103453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.108104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.108387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.108414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.113073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.113414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.113443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.118035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.118382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.118410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.123049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.123329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.123356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.128047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.128348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.128376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.132952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.133261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.133288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.137901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.138187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.138215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.142861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.143168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.143195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.147809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.148091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.148118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.152730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.153013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.153040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.157662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.574 [2024-07-26 07:43:13.157965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.574 [2024-07-26 07:43:13.157991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.574 [2024-07-26 07:43:13.162724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.575 [2024-07-26 07:43:13.163009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.575 [2024-07-26 07:43:13.163036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.575 [2024-07-26 07:43:13.167688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.575 [2024-07-26 07:43:13.167972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.575 [2024-07-26 07:43:13.167999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.575 [2024-07-26 07:43:13.172907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.575 
[2024-07-26 07:43:13.173255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.575 [2024-07-26 07:43:13.173284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.834 [2024-07-26 07:43:13.177971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.834 [2024-07-26 07:43:13.178292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.834 [2024-07-26 07:43:13.178319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.834 [2024-07-26 07:43:13.183029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.834 [2024-07-26 07:43:13.183310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.834 [2024-07-26 07:43:13.183337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.834 [2024-07-26 07:43:13.187920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.834 [2024-07-26 07:43:13.188202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.834 [2024-07-26 07:43:13.188229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.834 [2024-07-26 07:43:13.192846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.834 [2024-07-26 07:43:13.193159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.834 [2024-07-26 07:43:13.193187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.834 [2024-07-26 07:43:13.197817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.834 [2024-07-26 07:43:13.198085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.834 [2024-07-26 07:43:13.198112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.834 [2024-07-26 07:43:13.202774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.834 [2024-07-26 07:43:13.203055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.834 [2024-07-26 07:43:13.203083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.834 [2024-07-26 07:43:13.207848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) 
with pdu=0x2000190fef90 00:17:47.834 [2024-07-26 07:43:13.208136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.834 [2024-07-26 07:43:13.208163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.834 [2024-07-26 07:43:13.212859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.834 [2024-07-26 07:43:13.213166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.834 [2024-07-26 07:43:13.213194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.834 [2024-07-26 07:43:13.217975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.834 [2024-07-26 07:43:13.218268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.834 [2024-07-26 07:43:13.218295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.834 [2024-07-26 07:43:13.223006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.834 [2024-07-26 07:43:13.223298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.834 [2024-07-26 07:43:13.223326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.834 [2024-07-26 07:43:13.228071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.834 [2024-07-26 07:43:13.228360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.834 [2024-07-26 07:43:13.228388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.834 [2024-07-26 07:43:13.233241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.233549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.233577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.238405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.238736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.238765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.243696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.244007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.244036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.248848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.249157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.249185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.254041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.254339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.254367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.259253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.259558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.259587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.264421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.264735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.264764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.269485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.269777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.269805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.274588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.274877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.274904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.279615] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.279904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.279931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.284697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.284989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.285016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.289875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.290163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.290191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.294989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.295280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.295308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.300055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.300344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.300372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.305057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.305399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.305439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.310143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.310435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.310462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
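[Editor's note] The log entries above and below record one behavior repeating for many WRITEs on qid:1: the TCP transport's data_crc32_calc_done callback reports "Data digest error" on the queue pair, and the matching command is then printed as completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). Per the NVMe/TCP transport specification, the data digest (DDGST) is a CRC32C computed over a PDU's DATA field, so each of these entries corresponds to a CRC32C mismatch on a received payload. The sketch below is a minimal, self-contained illustration of that kind of digest check only; it is not SPDK code, and the names crc32c and check_data_digest are hypothetical.

/*
 * Editor's illustration (not SPDK code): recompute the NVMe/TCP data digest
 * (CRC32C, reflected polynomial 0x82F63B78) over a PDU's DATA bytes and
 * compare it with the DDGST value carried in the PDU.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;                 /* standard CRC-32C initial value */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
    }
    return crc ^ 0xFFFFFFFFu;                   /* final XOR */
}

/* Returns 0 when the payload matches its digest, -1 on a "data digest error". */
static int check_data_digest(const uint8_t *data, size_t len, uint32_t ddgst)
{
    return crc32c(data, len) == ddgst ? 0 : -1;
}

int main(void)
{
    /* Well-known CRC-32C check value: crc32c("123456789") == 0xE3069283. */
    const uint8_t payload[] = "123456789";
    uint32_t good = crc32c(payload, 9);
    printf("crc32c(\"123456789\") = 0x%08X\n", good);
    printf("intact PDU:    %d\n", check_data_digest(payload, 9, good));
    printf("corrupted PDU: %d\n", check_data_digest(payload, 9, good ^ 1));
    return 0;
}

A production transport would use a table-driven or hardware-accelerated CRC32C rather than this bit-by-bit form, but the comparison it performs is the same one these log entries report as failing. [End editor's note]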
00:17:47.835 [2024-07-26 07:43:13.315199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.315505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.315532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.320232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.320537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.320565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.325351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.325661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.325689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.330361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.330688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.330716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.335390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.335691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.335719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.340413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.340713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.340740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.345519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.345819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.345863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.350633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.350928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.350956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.355637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.355928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.355955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.360673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.360964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.835 [2024-07-26 07:43:13.360991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.835 [2024-07-26 07:43:13.365756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.835 [2024-07-26 07:43:13.366061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.836 [2024-07-26 07:43:13.366089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.836 [2024-07-26 07:43:13.370829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.836 [2024-07-26 07:43:13.371119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.836 [2024-07-26 07:43:13.371147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.836 [2024-07-26 07:43:13.375891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.836 [2024-07-26 07:43:13.376180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.836 [2024-07-26 07:43:13.376208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.836 [2024-07-26 07:43:13.380918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.836 [2024-07-26 07:43:13.381218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.836 [2024-07-26 07:43:13.381262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.836 [2024-07-26 07:43:13.386045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.836 [2024-07-26 07:43:13.386348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.836 [2024-07-26 07:43:13.386376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.836 [2024-07-26 07:43:13.391129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.836 [2024-07-26 07:43:13.391431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.836 [2024-07-26 07:43:13.391459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.836 [2024-07-26 07:43:13.396259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.836 [2024-07-26 07:43:13.396573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.836 [2024-07-26 07:43:13.396600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.836 [2024-07-26 07:43:13.401364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.836 [2024-07-26 07:43:13.401704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.836 [2024-07-26 07:43:13.401733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.836 [2024-07-26 07:43:13.406422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.836 [2024-07-26 07:43:13.406739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.836 [2024-07-26 07:43:13.406766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.836 [2024-07-26 07:43:13.411411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.836 [2024-07-26 07:43:13.411716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.836 [2024-07-26 07:43:13.411743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.836 [2024-07-26 07:43:13.416437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.836 [2024-07-26 07:43:13.416743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.836 [2024-07-26 07:43:13.416771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.836 [2024-07-26 07:43:13.421504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.836 [2024-07-26 07:43:13.421795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.836 [2024-07-26 07:43:13.421823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.836 [2024-07-26 07:43:13.426591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.836 [2024-07-26 07:43:13.426881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.836 [2024-07-26 07:43:13.426908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.836 [2024-07-26 07:43:13.431815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:47.836 [2024-07-26 07:43:13.432127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.836 [2024-07-26 07:43:13.432156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.437096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.437424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.437453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.442495] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.442801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.442829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.447805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.448100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.448128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.453060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.453390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 
[2024-07-26 07:43:13.453413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.458308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.458637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.458665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.463564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.463856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.463899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.468772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.469097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.469124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.474019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.474308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.474335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.479182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.479470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.479508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.484182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.484470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.484508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.489070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.489390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.489418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.494266] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.494589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.494616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.499423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.499742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.499771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.504429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.504731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.504758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.509347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.509666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.509694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.514347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.514673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.514701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.519340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.519661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.519689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.524335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.524661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.524688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.529359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.529664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.529692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.534339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.534664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.094 [2024-07-26 07:43:13.534692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.094 [2024-07-26 07:43:13.539637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.094 [2024-07-26 07:43:13.539970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.095 [2024-07-26 07:43:13.539998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.095 [2024-07-26 07:43:13.544945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.095 [2024-07-26 07:43:13.545260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.095 [2024-07-26 07:43:13.545288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.095 [2024-07-26 07:43:13.550163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.095 [2024-07-26 07:43:13.550479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.095 [2024-07-26 07:43:13.550502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.095 [2024-07-26 07:43:13.555424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.095 [2024-07-26 07:43:13.555761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.095 [2024-07-26 07:43:13.555791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.095 [2024-07-26 07:43:13.560594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.095 [2024-07-26 07:43:13.560883] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.095 [2024-07-26 07:43:13.560911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.095 [2024-07-26 07:43:13.565891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.095 [2024-07-26 07:43:13.566200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.095 [2024-07-26 07:43:13.566228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.095 [2024-07-26 07:43:13.571088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.095 [2024-07-26 07:43:13.571405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.095 [2024-07-26 07:43:13.571434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.095 [2024-07-26 07:43:13.576310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.095 [2024-07-26 07:43:13.576615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.095 [2024-07-26 07:43:13.576644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.095 [2024-07-26 07:43:13.581522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.095 [2024-07-26 07:43:13.581821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.095 [2024-07-26 07:43:13.581848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.095 [2024-07-26 07:43:13.586730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.095 [2024-07-26 07:43:13.587021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.095 [2024-07-26 07:43:13.587050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.095 [2024-07-26 07:43:13.591857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.095 [2024-07-26 07:43:13.592164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.095 [2024-07-26 07:43:13.592193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.095 [2024-07-26 07:43:13.596995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.095 [2024-07-26 07:43:13.597322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.095 [2024-07-26 07:43:13.597350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same message sequence, a tcp.c:2113:data_crc32_calc_done "Data digest error" on tqpair=(0xda1080), the WRITE command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, repeats with varying LBAs for several dozen more write commands between 07:43:13.602 and 07:43:13.982 ...]
00:17:48.615 [2024-07-26 07:43:13.987228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xda1080) with pdu=0x2000190fef90 00:17:48.615 [2024-07-26 07:43:13.987513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15
nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.615 [2024-07-26 07:43:13.987540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.615 00:17:48.615 Latency(us) 00:17:48.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.615 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:48.615 nvme0n1 : 2.00 6074.67 759.33 0.00 0.00 2627.93 2219.29 11796.48 00:17:48.615 =================================================================================================================== 00:17:48.615 Total : 6074.67 759.33 0.00 0.00 2627.93 2219.29 11796.48 00:17:48.615 0 00:17:48.615 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:48.615 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:48.615 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:48.615 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:48.615 | .driver_specific 00:17:48.615 | .nvme_error 00:17:48.615 | .status_code 00:17:48.615 | .command_transient_transport_error' 00:17:48.873 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 392 > 0 )) 00:17:48.873 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80069 00:17:48.873 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80069 ']' 00:17:48.873 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80069 00:17:48.873 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:48.873 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:48.873 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80069 00:17:48.873 killing process with pid 80069 00:17:48.873 Received shutdown signal, test time was about 2.000000 seconds 00:17:48.873 00:17:48.873 Latency(us) 00:17:48.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.873 =================================================================================================================== 00:17:48.873 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:48.873 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:48.873 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:48.873 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80069' 00:17:48.873 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80069 00:17:48.873 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80069 00:17:49.131 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79862 00:17:49.131 07:43:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79862 ']' 00:17:49.131 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79862 00:17:49.131 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:49.132 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:49.132 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79862 00:17:49.132 killing process with pid 79862 00:17:49.132 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:49.132 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:49.132 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79862' 00:17:49.132 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79862 00:17:49.132 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79862 00:17:49.390 ************************************ 00:17:49.390 END TEST nvmf_digest_error 00:17:49.390 ************************************ 00:17:49.390 00:17:49.390 real 0m18.636s 00:17:49.390 user 0m35.708s 00:17:49.390 sys 0m4.737s 00:17:49.390 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:49.390 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:49.390 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:49.390 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:17:49.390 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:49.390 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:17:49.390 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:49.390 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:17:49.390 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:49.390 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:49.649 rmmod nvme_tcp 00:17:49.649 rmmod nvme_fabrics 00:17:49.649 rmmod nvme_keyring 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 79862 ']' 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 79862 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 79862 ']' 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 79862 00:17:49.649 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (79862) - No such process 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process 
with pid 79862 is not found' 00:17:49.649 Process with pid 79862 is not found 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:49.649 00:17:49.649 real 0m38.362s 00:17:49.649 user 1m12.064s 00:17:49.649 sys 0m9.978s 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:49.649 ************************************ 00:17:49.649 END TEST nvmf_digest 00:17:49.649 ************************************ 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.649 ************************************ 00:17:49.649 START TEST nvmf_host_multipath 00:17:49.649 ************************************ 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:49.649 * Looking for test storage... 
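The nvmf_digest_error run that finishes above verifies the injected failures by reading the bdev I/O statistics over the bperf RPC socket and extracting the transient-transport-error counter with jq, as the trace shows. A minimal stand-alone sketch of that check, using the socket path, bdev name, and jq filter taken from the trace (the final comparison mirrors the test's own "greater than zero" assertion):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Read per-bdev I/O statistics from the bdevperf instance via its RPC socket.
  count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The digest-error test passes only if at least one transient transport error was recorded.
  (( count > 0 )) && echo "transient transport errors recorded: $count"

In the run above this counter came back as 392 before the bdevperf process (pid 80069) and the nvmf target (pid 79862) were shut down.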
00:17:49.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.649 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 
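Before any multipath I/O can run, nvmftestinit has to build the virtual network that the NVMe/TCP target will listen on; the ip and iptables commands that follow in the trace create a dedicated network namespace for the target, veth pairs joined by a bridge, the 10.0.0.0/24 addresses, and a firewall rule admitting port 4420. A condensed sketch of that topology, reconstructed from the commands logged below (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is created the same way and is left out here, as are cleanup and error handling):

  # Namespace that will host the SPDK target.
  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: one for the initiator side, one for the target side.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # Addresses: initiator 10.0.0.1, target 10.0.0.2 inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  # Bring everything up and bridge the two outer veth ends together.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # Admit NVMe/TCP traffic on port 4420 and allow forwarding across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Sanity checks, as in the trace: initiator to target and target to initiator.
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The ping statistics below confirm sub-0.1 ms round trips across the bridge before the target application is even started.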
00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:49.650 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:49.908 07:43:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:49.908 Cannot find device "nvmf_tgt_br" 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:49.908 Cannot find device "nvmf_tgt_br2" 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:49.908 Cannot find device "nvmf_tgt_br" 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:49.908 Cannot find device "nvmf_tgt_br2" 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:49.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:49.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:49.908 07:43:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:49.908 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:50.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:17:50.166 00:17:50.166 --- 10.0.0.2 ping statistics --- 00:17:50.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.166 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:50.166 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:50.166 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:17:50.166 00:17:50.166 --- 10.0.0.3 ping statistics --- 00:17:50.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.166 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:50.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:50.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:17:50.166 00:17:50.166 --- 10.0.0.1 ping statistics --- 00:17:50.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.166 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:50.166 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=80334 00:17:50.167 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:50.167 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 80334 00:17:50.167 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 80334 ']' 00:17:50.167 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.167 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:50.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.167 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.167 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:50.167 07:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:50.167 [2024-07-26 07:43:15.635202] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:17:50.167 [2024-07-26 07:43:15.635305] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.424 [2024-07-26 07:43:15.770816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:50.424 [2024-07-26 07:43:15.895946] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.424 [2024-07-26 07:43:15.896026] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.424 [2024-07-26 07:43:15.896054] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.424 [2024-07-26 07:43:15.896062] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.424 [2024-07-26 07:43:15.896070] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.424 [2024-07-26 07:43:15.896429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.424 [2024-07-26 07:43:15.896465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.424 [2024-07-26 07:43:15.968216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:51.357 07:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:51.357 07:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:17:51.357 07:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:51.357 07:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:51.357 07:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:51.357 07:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.357 07:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80334 00:17:51.357 07:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:51.357 [2024-07-26 07:43:16.863424] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.357 07:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:51.615 Malloc0 00:17:51.615 07:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:51.874 07:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:52.132 07:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:52.390 [2024-07-26 07:43:17.917135] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.390 07:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:52.648 [2024-07-26 07:43:18.137182] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:52.648 07:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80384 00:17:52.648 07:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:52.648 07:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:52.648 07:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80384 /var/tmp/bdevperf.sock 00:17:52.648 07:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 80384 ']' 00:17:52.648 07:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.648 07:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:52.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.648 07:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.648 07:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:52.648 07:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:53.583 07:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:53.583 07:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:17:53.583 07:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:53.841 07:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:54.099 Nvme0n1 00:17:54.099 07:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:54.666 Nvme0n1 00:17:54.666 07:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:54.666 07:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:17:55.600 07:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:55.600 07:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:55.860 07:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:56.136 07:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:56.136 07:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80429 00:17:56.136 07:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:56.136 07:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80334 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:02.711 07:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:02.711 07:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:02.711 07:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:02.711 07:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:02.711 Attaching 4 probes... 00:18:02.711 @path[10.0.0.2, 4421]: 18103 00:18:02.711 @path[10.0.0.2, 4421]: 18855 00:18:02.711 @path[10.0.0.2, 4421]: 18472 00:18:02.711 @path[10.0.0.2, 4421]: 18744 00:18:02.711 @path[10.0.0.2, 4421]: 18575 00:18:02.711 07:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:02.711 07:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:02.711 07:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:02.711 07:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:02.711 07:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:02.711 07:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:02.711 07:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80429 00:18:02.711 07:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:02.711 07:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:02.711 07:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:02.711 07:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:02.711 07:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:02.711 07:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80547 00:18:02.711 07:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:02.711 07:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80334 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:09.286 07:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:09.286 07:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:09.286 07:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:09.286 07:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:09.286 Attaching 4 probes... 00:18:09.286 @path[10.0.0.2, 4420]: 18629 00:18:09.286 @path[10.0.0.2, 4420]: 18761 00:18:09.286 @path[10.0.0.2, 4420]: 18795 00:18:09.286 @path[10.0.0.2, 4420]: 19015 00:18:09.286 @path[10.0.0.2, 4420]: 18763 00:18:09.286 07:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:09.286 07:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:09.286 07:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:09.286 07:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:09.286 07:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:09.286 07:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:09.286 07:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80547 00:18:09.286 07:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:09.286 07:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:09.286 07:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:09.286 07:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:09.544 07:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:09.544 07:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80334 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:09.544 07:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80660 00:18:09.544 07:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:16.122 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:16.122 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:16.122 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:16.122 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:16.122 Attaching 4 probes... 00:18:16.122 @path[10.0.0.2, 4421]: 14604 00:18:16.122 @path[10.0.0.2, 4421]: 18298 00:18:16.122 @path[10.0.0.2, 4421]: 18570 00:18:16.122 @path[10.0.0.2, 4421]: 18212 00:18:16.122 @path[10.0.0.2, 4421]: 18501 00:18:16.122 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:16.122 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:16.122 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:16.122 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:16.122 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:16.122 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:16.122 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80660 00:18:16.122 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:16.122 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:16.122 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:16.122 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:16.381 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:16.381 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80772 00:18:16.381 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80334 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:16.381 07:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:22.941 07:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:22.941 07:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:22.941 07:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:22.941 07:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:22.941 Attaching 4 probes... 
00:18:22.941 00:18:22.941 00:18:22.941 00:18:22.941 00:18:22.941 00:18:22.941 07:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:22.941 07:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:22.941 07:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:22.941 07:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:22.941 07:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:22.941 07:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:22.941 07:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80772 00:18:22.941 07:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:22.941 07:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:22.941 07:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:22.941 07:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:23.200 07:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:23.200 07:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80889 00:18:23.200 07:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80334 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:23.200 07:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:29.760 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:29.760 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:29.760 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:29.760 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:29.760 Attaching 4 probes... 
00:18:29.760 @path[10.0.0.2, 4421]: 17478 00:18:29.760 @path[10.0.0.2, 4421]: 18173 00:18:29.760 @path[10.0.0.2, 4421]: 17948 00:18:29.760 @path[10.0.0.2, 4421]: 18104 00:18:29.760 @path[10.0.0.2, 4421]: 18038 00:18:29.760 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:29.760 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:29.760 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:29.760 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:29.760 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:29.760 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:29.760 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80889 00:18:29.760 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:29.760 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:29.760 [2024-07-26 07:43:55.141457] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ba380 is same with the state(5) to be set 00:18:29.760 [2024-07-26 07:43:55.141538] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ba380 is same with the state(5) to be set 00:18:29.760 [2024-07-26 07:43:55.141551] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ba380 is same with the state(5) to be set 00:18:29.760 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:30.695 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:30.695 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81008 00:18:30.695 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:30.695 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80334 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:37.253 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:37.253 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:37.253 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:37.253 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:37.253 Attaching 4 probes... 
00:18:37.253 @path[10.0.0.2, 4420]: 17715 00:18:37.253 @path[10.0.0.2, 4420]: 18105 00:18:37.253 @path[10.0.0.2, 4420]: 18106 00:18:37.253 @path[10.0.0.2, 4420]: 18137 00:18:37.253 @path[10.0.0.2, 4420]: 18276 00:18:37.253 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:37.253 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:37.253 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:37.253 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:37.253 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:37.253 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:37.253 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81008 00:18:37.253 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:37.253 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:37.253 [2024-07-26 07:44:02.682110] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:37.253 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:37.512 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:18:44.067 07:44:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:44.067 07:44:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81187 00:18:44.067 07:44:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80334 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:44.067 07:44:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:49.352 07:44:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:49.352 07:44:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:49.611 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:49.611 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:49.869 Attaching 4 probes... 
00:18:49.869 @path[10.0.0.2, 4421]: 17521 00:18:49.869 @path[10.0.0.2, 4421]: 17860 00:18:49.869 @path[10.0.0.2, 4421]: 18047 00:18:49.869 @path[10.0.0.2, 4421]: 17762 00:18:49.869 @path[10.0.0.2, 4421]: 18057 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81187 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80384 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 80384 ']' 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 80384 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80384 00:18:49.869 killing process with pid 80384 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80384' 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 80384 00:18:49.869 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 80384 00:18:49.869 Connection closed with partial response: 00:18:49.869 00:18:49.869 00:18:50.139 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80384 00:18:50.139 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:50.139 [2024-07-26 07:43:18.212753] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
00:18:50.139 [2024-07-26 07:43:18.213393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80384 ] 00:18:50.139 [2024-07-26 07:43:18.355984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.139 [2024-07-26 07:43:18.484790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.139 [2024-07-26 07:43:18.560204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:50.139 Running I/O for 90 seconds... 00:18:50.139 [2024-07-26 07:43:28.195898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-07-26 07:43:28.195982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:50.139 [2024-07-26 07:43:28.196063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-07-26 07:43:28.196085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:50.139 [2024-07-26 07:43:28.196108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-07-26 07:43:28.196123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:50.139 [2024-07-26 07:43:28.196144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-07-26 07:43:28.196159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:50.139 [2024-07-26 07:43:28.196181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-07-26 07:43:28.196195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:50.139 [2024-07-26 07:43:28.196216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-07-26 07:43:28.196231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.139 [2024-07-26 07:43:28.196252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-07-26 07:43:28.196267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:50.139 [2024-07-26 07:43:28.196288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-07-26 07:43:28.196303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 
sqhd:0043 p:0 m:0 dnr:0 00:18:50.139 [2024-07-26 07:43:28.196324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:61704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.139 [2024-07-26 07:43:28.196338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:50.139 [2024-07-26 07:43:28.196360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:61712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.139 [2024-07-26 07:43:28.196375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:50.139 [2024-07-26 07:43:28.196395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.139 [2024-07-26 07:43:28.196427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.196450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.196465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.196498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.196515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.196537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.196552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.196572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.196587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.196607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.196621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.196642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.196657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.196677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.196691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.196712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.196727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.196748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.196762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.196785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.196800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.196821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.196835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.196857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.196896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.196918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.196932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.196958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:62152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-07-26 07:43:28.196974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.196995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-07-26 07:43:28.197009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-07-26 07:43:28.197043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-07-26 07:43:28.197078] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-07-26 07:43:28.197113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-07-26 07:43:28.197147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-07-26 07:43:28.197182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-07-26 07:43:28.197244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.197300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.197349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.197394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.197432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.197505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.197543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.197579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-07-26 07:43:28.197615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-07-26 07:43:28.197650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-07-26 07:43:28.197686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-07-26 07:43:28.197721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-07-26 07:43:28.197757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-07-26 07:43:28.197792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-07-26 07:43:28.197842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-07-26 07:43:28.197877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:62272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-07-26 07:43:28.197921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-07-26 07:43:28.197956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:50.140 [2024-07-26 07:43:28.197976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.197991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-07-26 07:43:28.198232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-07-26 07:43:28.198266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-07-26 07:43:28.198301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-07-26 07:43:28.198343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-07-26 07:43:28.198377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-07-26 07:43:28.198412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-07-26 07:43:28.198447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-07-26 07:43:28.198511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:18:50.141 [2024-07-26 07:43:28.198649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.198980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.198994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.199015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.199030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.199053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.199067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.199088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.199103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.199124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.199139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.199160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.199174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.199195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.199210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.199231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-07-26 07:43:28.199251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.199281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-07-26 07:43:28.199299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.199321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-07-26 07:43:28.199335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:50.141 [2024-07-26 07:43:28.199356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-07-26 07:43:28.199371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.199392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-07-26 07:43:28.199406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.199427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-07-26 07:43:28.199442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.199463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-07-26 07:43:28.199493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.199515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-07-26 07:43:28.199530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.199551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-07-26 07:43:28.199566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.199586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.199601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.199623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.199637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.199658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.199673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.199695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.199710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.199739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:50.142 [2024-07-26 07:43:28.199754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.199775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.199789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.199810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.199825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.199845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.199860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.199881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.199896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.199917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.199932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.199953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.199967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.199988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.200003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.200024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-07-26 07:43:28.200038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.200059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-07-26 07:43:28.200073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.200094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:82 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-07-26 07:43:28.200109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.200130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-07-26 07:43:28.200144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.200171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-07-26 07:43:28.200187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.200208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-07-26 07:43:28.200222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.200243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-07-26 07:43:28.200258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.201799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-07-26 07:43:28.201831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.201859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.201876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.201898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.201914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.201935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.201950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.201972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.201986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.202007] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.202022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.202043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.202058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.202079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.202094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.202254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.202278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.202303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.202332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.202355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.202370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.202391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.202405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.202426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-07-26 07:43:28.202440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:50.142 [2024-07-26 07:43:28.202462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:28.202492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:28.202515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:28.202530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:18:50.143 [2024-07-26 07:43:28.202551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:28.202566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:28.202593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:28.202610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.768698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.768759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.768836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.768856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.768878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.768893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.768928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.768942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.768961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:30448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.768999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.769035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.769068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.769100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.143 [2024-07-26 07:43:34.769133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.143 [2024-07-26 07:43:34.769166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.143 [2024-07-26 07:43:34.769204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.143 [2024-07-26 07:43:34.769270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.143 [2024-07-26 07:43:34.769304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.143 [2024-07-26 07:43:34.769339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.143 [2024-07-26 07:43:34.769374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.143 [2024-07-26 07:43:34.769409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.769465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:30488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.769538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.769588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:30504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.769622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.769656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.769704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.769737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.769770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:30544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.769802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.769836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.769869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:50.143 [2024-07-26 07:43:34.769902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.769936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.769978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.769998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:30592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.770011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.770031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:30600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.143 [2024-07-26 07:43:34.770045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.770064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.143 [2024-07-26 07:43:34.770077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.770097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:30168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.143 [2024-07-26 07:43:34.770111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:50.143 [2024-07-26 07:43:34.770131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.144 [2024-07-26 07:43:34.770144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.144 [2024-07-26 07:43:34.770177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.144 [2024-07-26 07:43:34.770210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 
lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.144 [2024-07-26 07:43:34.770243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.144 [2024-07-26 07:43:34.770276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.144 [2024-07-26 07:43:34.770309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:30624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:30656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:30664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:30672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:30736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
00:18:50.144 [2024-07-26 07:43:34.770931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.770979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.770999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.144 [2024-07-26 07:43:34.771012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.771032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.144 [2024-07-26 07:43:34.771045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:50.144 [2024-07-26 07:43:34.771065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.144 [2024-07-26 07:43:34.771078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.145 [2024-07-26 07:43:34.771110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.145 [2024-07-26 07:43:34.771143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.145 [2024-07-26 07:43:34.771176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.145 [2024-07-26 07:43:34.771210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.145 [2024-07-26 07:43:34.771345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.145 [2024-07-26 07:43:34.771378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.771412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.771445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:30784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.771494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:30792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.771528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:30800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.771567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:30808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.771601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.771634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.771667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:30832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.771701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.771733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.771776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.771809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.145 [2024-07-26 07:43:34.771841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.145 [2024-07-26 07:43:34.771876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.145 [2024-07-26 07:43:34.771909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.145 [2024-07-26 07:43:34.771942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.145 [2024-07-26 07:43:34.771975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.771994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.145 [2024-07-26 07:43:34.772007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.772027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:50.145 [2024-07-26 07:43:34.772040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.772059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.145 [2024-07-26 07:43:34.772073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.772093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.772106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.772126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.772139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.772159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:30880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.772179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.772199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.772213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.772233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.772247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.772266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.772279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.772299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.772312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.772332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.772345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.772364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 
lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.772378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.772397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.772411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.772430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.772443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.772462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.145 [2024-07-26 07:43:34.772504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:50.145 [2024-07-26 07:43:34.772526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.772540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.772560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.772574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.772594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.772615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.772636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.772650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.772670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.146 [2024-07-26 07:43:34.772684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.772704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.146 [2024-07-26 07:43:34.772736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.772757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.146 [2024-07-26 07:43:34.772783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.772805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.146 [2024-07-26 07:43:34.772820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.772841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.146 [2024-07-26 07:43:34.772856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.772877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.146 [2024-07-26 07:43:34.772906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.772926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.146 [2024-07-26 07:43:34.772940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.773725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.146 [2024-07-26 07:43:34.773752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.773786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.773802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.773831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.773846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.773874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.773889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.773932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:31016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.773947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:18:50.146 [2024-07-26 07:43:34.773976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.773990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.774018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.774033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.774061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.774075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.774119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:31048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.774138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.774167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.774182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.774210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.774224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.774252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.774274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.774304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.774318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.774347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.774361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.774390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.774404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.774432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:31104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.774446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:34.774498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:34.774517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:41.866129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:41.866191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:41.866265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:41.866285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:41.866308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:41.866323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:41.866343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:41.866357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:41.866377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:41.866391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:41.866411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:41.866424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:41.866444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:41.866459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:41.866478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:41.866509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:41.866535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:41.866551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:41.866571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:41.866585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:50.146 [2024-07-26 07:43:41.866605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.146 [2024-07-26 07:43:41.866619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.866639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.147 [2024-07-26 07:43:41.866677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.866698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.147 [2024-07-26 07:43:41.866712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.866732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.147 [2024-07-26 07:43:41.866745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.866766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.147 [2024-07-26 07:43:41.866782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.866802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.147 [2024-07-26 07:43:41.866815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.866835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.866849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.866868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:50.147 [2024-07-26 07:43:41.866881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.866901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.866914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.866934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.866947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.866967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.866980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.147 [2024-07-26 07:43:41.867151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.147 [2024-07-26 07:43:41.867186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.147 [2024-07-26 07:43:41.867220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 
lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.147 [2024-07-26 07:43:41.867253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.147 [2024-07-26 07:43:41.867286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.147 [2024-07-26 07:43:41.867319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.147 [2024-07-26 07:43:41.867354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.147 [2024-07-26 07:43:41.867388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.867928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 
dnr:0 00:18:50.147 [2024-07-26 07:43:41.867965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.147 [2024-07-26 07:43:41.867979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.868011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.147 [2024-07-26 07:43:41.868027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:50.147 [2024-07-26 07:43:41.868048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.148 [2024-07-26 07:43:41.868063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.148 [2024-07-26 07:43:41.868098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.148 [2024-07-26 07:43:41.868132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.148 [2024-07-26 07:43:41.868166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.148 [2024-07-26 07:43:41.868200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.148 [2024-07-26 07:43:41.868234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.148 [2024-07-26 07:43:41.868268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.868985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.868999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.869020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:50.148 [2024-07-26 07:43:41.869034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.869055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.148 [2024-07-26 07:43:41.869069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:50.148 [2024-07-26 07:43:41.869099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.149 [2024-07-26 07:43:41.869115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.149 [2024-07-26 07:43:41.869150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.149 [2024-07-26 07:43:41.869190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.149 [2024-07-26 07:43:41.869254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.149 [2024-07-26 07:43:41.869291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.149 [2024-07-26 07:43:41.869327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.149 [2024-07-26 07:43:41.869370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.149 [2024-07-26 07:43:41.869407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 
nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.149 [2024-07-26 07:43:41.869442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.149 [2024-07-26 07:43:41.869478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.149 [2024-07-26 07:43:41.869529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.149 [2024-07-26 07:43:41.869565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.149 [2024-07-26 07:43:41.869600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.149 [2024-07-26 07:43:41.869635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.149 [2024-07-26 07:43:41.869686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.149 [2024-07-26 07:43:41.869720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:85128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.149 [2024-07-26 07:43:41.869761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.149 [2024-07-26 07:43:41.869796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.149 [2024-07-26 07:43:41.869830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.149 [2024-07-26 07:43:41.869873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.149 [2024-07-26 07:43:41.869907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.149 [2024-07-26 07:43:41.869942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:50.149 [2024-07-26 07:43:41.869962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:41.869976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.869996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:41.870010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.870030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:41.870044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.870064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:41.870078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.870098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.150 [2024-07-26 07:43:41.870112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.870132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.150 [2024-07-26 07:43:41.870146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 
00:18:50.150 [2024-07-26 07:43:41.870166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.150 [2024-07-26 07:43:41.870180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.870200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.150 [2024-07-26 07:43:41.870215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.870235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.150 [2024-07-26 07:43:41.870249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.870276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.150 [2024-07-26 07:43:41.870291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.870318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.150 [2024-07-26 07:43:41.870333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.870354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.150 [2024-07-26 07:43:41.870368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.870388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.150 [2024-07-26 07:43:41.870402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.870423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.150 [2024-07-26 07:43:41.870437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.870456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:85224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.150 [2024-07-26 07:43:41.870470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.870518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.150 [2024-07-26 07:43:41.870535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.870556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.150 [2024-07-26 07:43:41.870571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.870592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.150 [2024-07-26 07:43:41.870607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.870628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.150 [2024-07-26 07:43:41.870642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.871273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.150 [2024-07-26 07:43:41.871299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.871333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:41.871350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.871379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:41.871405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.871447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:41.871463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.871492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:41.871539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.871571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:41.871586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.871616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:41.871631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.871668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:41.871684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:41.871729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:41.871749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:55.142251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:55.142301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:55.142364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:55.142384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:55.142407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:55.142423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:55.142444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:55.142459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:55.142512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:55.142529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:55.142550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:55.142587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:55.142611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.150 [2024-07-26 07:43:55.142626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:55.142646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:50.150 [2024-07-26 07:43:55.142661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:55.142682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.150 [2024-07-26 07:43:55.142696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:55.142717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.150 [2024-07-26 07:43:55.142731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:50.150 [2024-07-26 07:43:55.142752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.142766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.142786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.142800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.142821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.142835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.142856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.142870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.142890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.142905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.142926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.142940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.142991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:10344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.151 [2024-07-26 07:43:55.143145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.151 [2024-07-26 07:43:55.143173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.151 [2024-07-26 07:43:55.143201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.151 [2024-07-26 07:43:55.143229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.151 [2024-07-26 07:43:55.143257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.151 [2024-07-26 07:43:55.143285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.151 [2024-07-26 07:43:55.143313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10936 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:50.151 [2024-07-26 07:43:55.143341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 
07:43:55.143649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.151 [2024-07-26 07:43:55.143985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.151 [2024-07-26 07:43:55.143998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.152 [2024-07-26 07:43:55.144027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.152 [2024-07-26 07:43:55.144055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.144084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.144111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.144140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.144174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.144203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.144231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.144259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.144288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.152 [2024-07-26 07:43:55.144316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.152 [2024-07-26 07:43:55.144344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.152 [2024-07-26 07:43:55.144374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.152 [2024-07-26 07:43:55.144403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.152 [2024-07-26 07:43:55.144431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.152 [2024-07-26 07:43:55.144459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.152 [2024-07-26 07:43:55.144521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.152 [2024-07-26 07:43:55.144550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.144587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.144616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.144645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.144673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.144701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.144729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.144757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.144785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.152 [2024-07-26 07:43:55.144813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.152 [2024-07-26 07:43:55.144841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 
[2024-07-26 07:43:55.144865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.152 [2024-07-26 07:43:55.144879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.152 [2024-07-26 07:43:55.144909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.152 [2024-07-26 07:43:55.144937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.152 [2024-07-26 07:43:55.144973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.144987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.152 [2024-07-26 07:43:55.145001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.145015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.152 [2024-07-26 07:43:55.145029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.145044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.145057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.145072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.145085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.145101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.145114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.145128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.145142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.145156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.152 [2024-07-26 07:43:55.145170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.152 [2024-07-26 07:43:55.145185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.153 [2024-07-26 07:43:55.145198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.153 [2024-07-26 07:43:55.145243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.153 [2024-07-26 07:43:55.145274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.153 [2024-07-26 07:43:55.145302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.153 [2024-07-26 07:43:55.145338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.153 [2024-07-26 07:43:55.145375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.153 [2024-07-26 07:43:55.145403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.153 [2024-07-26 07:43:55.145431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.153 [2024-07-26 07:43:55.145459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:61 nsid:1 lba:11184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.153 [2024-07-26 07:43:55.145502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.153 [2024-07-26 07:43:55.145530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.153 [2024-07-26 07:43:55.145558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.153 [2024-07-26 07:43:55.145586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.153 [2024-07-26 07:43:55.145614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.153 [2024-07-26 07:43:55.145642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.153 [2024-07-26 07:43:55.145670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.153 [2024-07-26 07:43:55.145698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.153 [2024-07-26 07:43:55.145734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1389680 is same with the state(5) to be set 00:18:50.153 [2024-07-26 07:43:55.145766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.153 [2024-07-26 07:43:55.145777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.153 [2024-07-26 07:43:55.145788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10808 len:8 PRP1 0x0 PRP2 0x0 00:18:50.153 [2024-07-26 07:43:55.145801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.153 [2024-07-26 07:43:55.145842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.153 [2024-07-26 07:43:55.145853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:8 PRP1 0x0 PRP2 0x0 00:18:50.153 [2024-07-26 07:43:55.145866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.153 [2024-07-26 07:43:55.145889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.153 [2024-07-26 07:43:55.145899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11208 len:8 PRP1 0x0 PRP2 0x0 00:18:50.153 [2024-07-26 07:43:55.145912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.153 [2024-07-26 07:43:55.145934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.153 [2024-07-26 07:43:55.145945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11216 len:8 PRP1 0x0 PRP2 0x0 00:18:50.153 [2024-07-26 07:43:55.145958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.145971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.153 [2024-07-26 07:43:55.145980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.153 [2024-07-26 07:43:55.145990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11224 len:8 PRP1 0x0 PRP2 0x0 00:18:50.153 [2024-07-26 07:43:55.146003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.146016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.153 [2024-07-26 07:43:55.146026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.153 [2024-07-26 07:43:55.146036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:8 PRP1 0x0 PRP2 0x0 00:18:50.153 [2024-07-26 07:43:55.146049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.146062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.153 [2024-07-26 07:43:55.146071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.153 [2024-07-26 07:43:55.146081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:11240 len:8 PRP1 0x0 PRP2 0x0 00:18:50.153 [2024-07-26 07:43:55.146094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.146114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.153 [2024-07-26 07:43:55.146124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.153 [2024-07-26 07:43:55.146134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11248 len:8 PRP1 0x0 PRP2 0x0 00:18:50.153 [2024-07-26 07:43:55.146147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.146160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.153 [2024-07-26 07:43:55.146169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.153 [2024-07-26 07:43:55.146180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11256 len:8 PRP1 0x0 PRP2 0x0 00:18:50.153 [2024-07-26 07:43:55.146192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.153 [2024-07-26 07:43:55.146211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.153 [2024-07-26 07:43:55.146226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.154 [2024-07-26 07:43:55.146236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:8 PRP1 0x0 PRP2 0x0 00:18:50.154 [2024-07-26 07:43:55.146249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.154 [2024-07-26 07:43:55.146262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.154 [2024-07-26 07:43:55.146272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.154 [2024-07-26 07:43:55.146282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11272 len:8 PRP1 0x0 PRP2 0x0 00:18:50.154 [2024-07-26 07:43:55.146295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.154 [2024-07-26 07:43:55.146308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.154 [2024-07-26 07:43:55.146318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.154 [2024-07-26 07:43:55.146328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11280 len:8 PRP1 0x0 PRP2 0x0 00:18:50.154 [2024-07-26 07:43:55.146341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.154 [2024-07-26 07:43:55.146354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.154 [2024-07-26 07:43:55.146363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.154 [2024-07-26 07:43:55.146373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11288 len:8 PRP1 0x0 PRP2 0x0 
00:18:50.154 [2024-07-26 07:43:55.146386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.154 [2024-07-26 07:43:55.146399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.154 [2024-07-26 07:43:55.146409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.154 [2024-07-26 07:43:55.146418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:8 PRP1 0x0 PRP2 0x0 00:18:50.154 [2024-07-26 07:43:55.146431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.154 [2024-07-26 07:43:55.146444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.154 [2024-07-26 07:43:55.146454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.154 [2024-07-26 07:43:55.146475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11304 len:8 PRP1 0x0 PRP2 0x0 00:18:50.154 [2024-07-26 07:43:55.146499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.154 [2024-07-26 07:43:55.146513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.154 [2024-07-26 07:43:55.146523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.154 [2024-07-26 07:43:55.146533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11312 len:8 PRP1 0x0 PRP2 0x0 00:18:50.154 [2024-07-26 07:43:55.146546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.154 [2024-07-26 07:43:55.146559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.154 [2024-07-26 07:43:55.146568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.154 [2024-07-26 07:43:55.146578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11320 len:8 PRP1 0x0 PRP2 0x0 00:18:50.154 [2024-07-26 07:43:55.146591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.154 [2024-07-26 07:43:55.146604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.154 [2024-07-26 07:43:55.146620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.154 [2024-07-26 07:43:55.146630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:8 PRP1 0x0 PRP2 0x0 00:18:50.154 [2024-07-26 07:43:55.146642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.154 [2024-07-26 07:43:55.146656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.154 [2024-07-26 07:43:55.146666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.154 [2024-07-26 07:43:55.146676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11336 len:8 PRP1 0x0 PRP2 0x0 00:18:50.154 [2024-07-26 07:43:55.146689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.154 [2024-07-26 07:43:55.146702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.154 [2024-07-26 07:43:55.146711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.154 [2024-07-26 07:43:55.146721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11344 len:8 PRP1 0x0 PRP2 0x0 00:18:50.154 [2024-07-26 07:43:55.146734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.154 [2024-07-26 07:43:55.146747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.154 [2024-07-26 07:43:55.146757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.154 [2024-07-26 07:43:55.146767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11352 len:8 PRP1 0x0 PRP2 0x0 00:18:50.154 [2024-07-26 07:43:55.146779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.154 [2024-07-26 07:43:55.146847] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1389680 was disconnected and freed. reset controller. 00:18:50.154 [2024-07-26 07:43:55.146955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.154 [2024-07-26 07:43:55.146980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.154 [2024-07-26 07:43:55.146996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.154 [2024-07-26 07:43:55.147009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.154 [2024-07-26 07:43:55.147034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.154 [2024-07-26 07:43:55.147047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.154 [2024-07-26 07:43:55.147061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.154 [2024-07-26 07:43:55.147074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.154 [2024-07-26 07:43:55.147089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.154 [2024-07-26 07:43:55.147103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.154 [2024-07-26 07:43:55.147123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130b100 is same with the state(5) to be set 00:18:50.154 [2024-07-26 07:43:55.148274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:50.154 
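
The long run of *NOTICE* lines above is nvme_qpair printing every outstanding READ/WRITE as it is failed back (first ASYMMETRIC ACCESS INACCESSIBLE, then ABORTED - SQ DELETION) while qpair 0x1389680 is torn down ahead of the controller reset. When reading a saved copy of this console output, a rough tally of those completion statuses can be made with standard tools; the file name below is only a placeholder for illustration:

# Tally the failure statuses printed in the dump above (build.log is an assumed name for a saved copy of this console output)
grep -Eo 'ASYMMETRIC ACCESS INACCESSIBLE|ABORTED - SQ DELETION' build.log | sort | uniq -c
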
[2024-07-26 07:43:55.148314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130b100 (9): Bad file descriptor 00:18:50.154 [2024-07-26 07:43:55.148728] uring.c: 663:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:50.154 [2024-07-26 07:43:55.148761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130b100 with addr=10.0.0.2, port=4421 00:18:50.154 [2024-07-26 07:43:55.148779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130b100 is same with the state(5) to be set 00:18:50.154 [2024-07-26 07:43:55.148838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130b100 (9): Bad file descriptor 00:18:50.154 [2024-07-26 07:43:55.148880] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:50.154 [2024-07-26 07:43:55.148896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:50.154 [2024-07-26 07:43:55.148911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:50.154 [2024-07-26 07:43:55.148943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:50.154 [2024-07-26 07:43:55.148959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:50.154 [2024-07-26 07:44:05.201672] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:50.154 Received shutdown signal, test time was about 55.189592 seconds 00:18:50.154 00:18:50.154 Latency(us) 00:18:50.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.154 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:50.154 Verification LBA range: start 0x0 length 0x4000 00:18:50.154 Nvme0n1 : 55.19 7747.88 30.27 0.00 0.00 16487.30 808.03 7046430.72 00:18:50.154 =================================================================================================================== 00:18:50.154 Total : 7747.88 30.27 0.00 0.00 16487.30 808.03 7046430.72 00:18:50.154 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:50.413 rmmod nvme_tcp 00:18:50.413 rmmod nvme_fabrics 00:18:50.413 rmmod nvme_keyring 00:18:50.413 07:44:15 
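
The bdevperf summary above reports, for Nvme0n1, a 55.19 s run at 7747.88 IOPS with a 4096-byte IO size and an average latency of 16487.30 us. The MiB/s column follows directly from those numbers; a quick arithmetic check (bc is used here purely for illustration):

# 7747.88 IOs/s * 4096 bytes per IO, converted to MiB/s -> ~30.27, matching the summary column above
echo 'scale=4; 7747.88 * 4096 / 1048576' | bc

Little's law is also roughly satisfied: with queue depth 128 and an average latency of 16487.30 us, 128 / 0.0164873 s gives about 7764 IOPS, close to the measured 7747.88 (the gap is plausibly the stretch where the queue drained during the controller reset).
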
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 80334 ']' 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 80334 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 80334 ']' 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 80334 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80334 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80334' 00:18:50.413 killing process with pid 80334 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 80334 00:18:50.413 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 80334 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:50.982 00:18:50.982 real 1m1.190s 00:18:50.982 user 2m48.704s 00:18:50.982 sys 0m18.815s 00:18:50.982 ************************************ 00:18:50.982 END TEST nvmf_host_multipath 00:18:50.982 ************************************ 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.982 ************************************ 00:18:50.982 START TEST nvmf_timeout 00:18:50.982 ************************************ 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:50.982 * Looking for test storage... 00:18:50.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:50.982 07:44:16 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:50.982 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:50.983 Cannot find device "nvmf_tgt_br" 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:50.983 Cannot find device "nvmf_tgt_br2" 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:50.983 Cannot find device "nvmf_tgt_br" 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:50.983 Cannot find device "nvmf_tgt_br2" 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:18:50.983 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:51.242 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:51.242 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 
00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:51.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:18:51.242 00:18:51.242 --- 10.0.0.2 ping statistics --- 00:18:51.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.242 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:51.242 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:51.242 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:18:51.242 00:18:51.242 --- 10.0.0.3 ping statistics --- 00:18:51.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.242 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:51.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:51.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:51.242 00:18:51.242 --- 10.0.0.1 ping statistics --- 00:18:51.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.242 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=81498 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 81498 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81498 ']' 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:51.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:51.242 07:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:51.501 [2024-07-26 07:44:16.880031] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:18:51.501 [2024-07-26 07:44:16.880119] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.501 [2024-07-26 07:44:17.015489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:51.759 [2024-07-26 07:44:17.140469] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
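The nvmf_veth_init/nvmfappstart steps traced above boil down to a small veth-plus-namespace topology: the initiator stays in the default namespace, the target runs inside nvmf_tgt_ns_spdk, and a bridge joins the host-side veth ends. The sketch below re-creates it using only the interface names, addresses and flags visible in the trace (nvmf_tgt_ns_spdk, nvmf_init_if/nvmf_init_br, nvmf_tgt_if/nvmf_tgt_br, nvmf_br, 10.0.0.1-2, port 4420); it is a simplified reconstruction, not the exact nvmf/common.sh code, and the second target interface (nvmf_tgt_if2, 10.0.0.3) is set up the same way and omitted here.

# Simplified sketch of the test topology set up above (run as root).
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end goes into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                # bridge joins the host-side veth ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                             # initiator -> target reachability
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target -> initiator reachability
# Start the target inside the namespace, as the trace does:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &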
00:18:51.759 [2024-07-26 07:44:17.140601] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.759 [2024-07-26 07:44:17.140613] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.759 [2024-07-26 07:44:17.140622] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.759 [2024-07-26 07:44:17.140630] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.759 [2024-07-26 07:44:17.140721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.759 [2024-07-26 07:44:17.140734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.760 [2024-07-26 07:44:17.212645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:52.325 07:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:52.325 07:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:18:52.325 07:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:52.325 07:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:52.325 07:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:52.325 07:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.325 07:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:52.326 07:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:52.584 [2024-07-26 07:44:18.179526] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.842 07:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:53.100 Malloc0 00:18:53.100 07:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:53.358 07:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:53.358 07:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:53.616 [2024-07-26 07:44:19.152707] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.616 07:44:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:53.616 07:44:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81547 00:18:53.616 07:44:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81547 /var/tmp/bdevperf.sock 00:18:53.617 07:44:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81547 ']' 00:18:53.617 07:44:19 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.617 07:44:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:53.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.617 07:44:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.617 07:44:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:53.617 07:44:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:53.617 [2024-07-26 07:44:19.212406] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:18:53.617 [2024-07-26 07:44:19.212523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81547 ] 00:18:53.875 [2024-07-26 07:44:19.347775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.875 [2024-07-26 07:44:19.451344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.134 [2024-07-26 07:44:19.524114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:54.701 07:44:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:54.701 07:44:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:18:54.701 07:44:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:54.959 07:44:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:55.217 NVMe0n1 00:18:55.217 07:44:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81565 00:18:55.217 07:44:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:55.217 07:44:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:18:55.476 Running I/O for 10 seconds... 
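The host/timeout.sh@25-@51 trace above is the whole data path of this test: the target exports a 64 MiB, 512-byte-block malloc bdev over NVMe/TCP on 10.0.0.2:4420, and bdevperf attaches to it with a 5-second controller-loss timeout and a 2-second reconnect delay before the 10-second verify workload starts. Below is a condensed sketch of that RPC sequence, using only the paths and arguments shown in the trace; the harness additionally waits for each application's RPC socket before issuing commands, which is omitted here.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side (default RPC socket /var/tmp/spdk.sock of the nvmf_tgt started above)
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf waits (-z) on its own RPC socket for configuration
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &

$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1        # same retry setting as the traced run
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Kick off the 10-second verify workload defined on the bdevperf command line
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

With the controller attached this way, removing the 10.0.0.2:4420 listener (host/timeout.sh@55, next) drops the TCP connection; the tcp.c qpair errors and ABORTED - SQ DELETION completions logged below are the immediate fallout of that removal, with bdev_nvme expected to retry the connection every reconnect-delay-sec for up to ctrlr-loss-timeout-sec.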
00:18:56.413 07:44:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:56.413 [2024-07-26 07:44:21.942537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942608] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942656] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942681] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942690] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942698] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942706] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942732] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942740] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942748] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942756] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [2024-07-26 07:44:21.942772] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.413 [... the same tcp.c:1653 "recv state of tqpair=0x18ec750 is same with the state(5) to be set" *ERROR* message repeats for every entry stamped from 07:44:21.942781 through 07:44:21.943507, differing only in the microsecond timestamp ...] 00:18:56.414 [2024-07-26 07:44:21.943516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same
with the state(5) to be set 00:18:56.414 [2024-07-26 07:44:21.943524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.414 [2024-07-26 07:44:21.943533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ec750 is same with the state(5) to be set 00:18:56.414 [2024-07-26 07:44:21.944210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.414 [2024-07-26 07:44:21.944252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.414 [2024-07-26 07:44:21.944277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.414 [2024-07-26 07:44:21.944289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.414 [2024-07-26 07:44:21.944303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.414 [2024-07-26 07:44:21.944313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.414 [2024-07-26 07:44:21.944325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.414 [2024-07-26 07:44:21.944334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.414 [2024-07-26 07:44:21.944346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.414 [2024-07-26 07:44:21.944370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.414 [2024-07-26 07:44:21.944381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.414 [2024-07-26 07:44:21.944390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.414 [2024-07-26 07:44:21.944401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.414 [2024-07-26 07:44:21.944567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.414 [2024-07-26 07:44:21.944867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.414 [2024-07-26 07:44:21.944881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.414 [2024-07-26 07:44:21.944893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.414 [2024-07-26 07:44:21.945091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.414 [2024-07-26 
07:44:21.945522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.945548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.945562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.945572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.945584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.945593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.945605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.945615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.945627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.945637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.945649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.945658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.945669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.945679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.945690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.945701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.946127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.946140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.946153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.946163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.946174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.946184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.946196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.946206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.946217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.946227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.946238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.946247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.946258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.946267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.946426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.946550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.946566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.946577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.946588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.946597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.947007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.947020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.947032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.947043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.947054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:84 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.947064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.947076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.947085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.947097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.947106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.947118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.947137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.947307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.947714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.947742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.947753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.947765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.947774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.947786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.947795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.947807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.947816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.947827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.947837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.947848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:66136 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.947858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.947869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.947878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.948283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.948307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.948320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.948330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.948341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.948350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.948362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.948372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.948383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.948394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.948405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.948415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.415 [2024-07-26 07:44:21.948426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.415 [2024-07-26 07:44:21.948436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.948447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.948457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.948588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:56.416 [2024-07-26 07:44:21.948600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.948611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.948621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.948632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.948769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.949061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.949167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.949183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.949192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.949204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.949214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.949235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.949246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.949257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.949267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.949278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.949288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.949299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.949308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.949319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.949328] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.949624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.949730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.949750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.949760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.949771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.949781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.949792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.949801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.949955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.950229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.950245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.950255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.950267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.950277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.950288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.950298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.950309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.950318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.950330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.950340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.950351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.950360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.950776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.950802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.950816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.950827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.950838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.950848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.950859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.950868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.950887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.950897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.950908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.950918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.950929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.950939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.951104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.951120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.951262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.951274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.951530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.951555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.951569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.951579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.951590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.951600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.951612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.951621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.951633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.951643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.951654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.951663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.951675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.416 [2024-07-26 07:44:21.951685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.416 [2024-07-26 07:44:21.951696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.952115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.952131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.952142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.952153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.952163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.952174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.952184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.952196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.952205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.952216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.952225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.952236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.952246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.952258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:66576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.952417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.952523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.952539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.952552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.952562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.952573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.952985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.953000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.953010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.953022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.953032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:56.417 [2024-07-26 07:44:21.953043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.953053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.953064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.953073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.953084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.953093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.953104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.953113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.953439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.953532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.953550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.953559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.953571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.953580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.953592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.953601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.953612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.953621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.953632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.953751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.953764] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.417 [2024-07-26 07:44:21.953774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.953786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.417 [2024-07-26 07:44:21.953795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.954037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.417 [2024-07-26 07:44:21.954068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.954081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.417 [2024-07-26 07:44:21.954091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.954102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.417 [2024-07-26 07:44:21.954112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.954123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.417 [2024-07-26 07:44:21.954133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.954144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.417 [2024-07-26 07:44:21.954287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.954394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.417 [2024-07-26 07:44:21.954406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.954418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.417 [2024-07-26 07:44:21.954427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.954438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.417 [2024-07-26 07:44:21.954447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.954459] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.417 [2024-07-26 07:44:21.954595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.417 [2024-07-26 07:44:21.954833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.418 [2024-07-26 07:44:21.954851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.418 [2024-07-26 07:44:21.954863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.418 [2024-07-26 07:44:21.954873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.418 [2024-07-26 07:44:21.954884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.418 [2024-07-26 07:44:21.954894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.418 [2024-07-26 07:44:21.954905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.418 [2024-07-26 07:44:21.954915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.418 [2024-07-26 07:44:21.954926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.418 [2024-07-26 07:44:21.955067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.418 [2024-07-26 07:44:21.955167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.418 [2024-07-26 07:44:21.955179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.418 [2024-07-26 07:44:21.955190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162b1b0 is same with the state(5) to be set 00:18:56.418 [2024-07-26 07:44:21.955204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:56.418 [2024-07-26 07:44:21.955212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:56.418 [2024-07-26 07:44:21.955227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66720 len:8 PRP1 0x0 PRP2 0x0 00:18:56.418 [2024-07-26 07:44:21.955236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.418 [2024-07-26 07:44:21.955529] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x162b1b0 was disconnected and freed. reset controller. 
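The dump above is one line per outstanding command: when bdev_nvme tears down TCP qpair 0x162b1b0 ahead of the controller reset, every queued READ/WRITE is completed manually with ABORTED - SQ DELETION status, and with the verify job running at queue depth 128 (see the job summary further down) that produces a long run of near-identical output. When triaging a failure it is usually enough to count the aborts rather than read them; a minimal sketch, assuming the console output has been saved to a file (the nvmf_timeout.log name here is hypothetical):

    # Count the aborted completions and the distinct LBAs they covered.
    grep -o 'ABORTED - SQ DELETION' nvmf_timeout.log | wc -l
    grep -o 'lba:[0-9]*' nvmf_timeout.log | sort -u | wc -l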
00:18:56.418 [2024-07-26 07:44:21.955758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.418 [2024-07-26 07:44:21.955843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.418 [2024-07-26 07:44:21.955855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.418 [2024-07-26 07:44:21.955864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.418 [2024-07-26 07:44:21.955875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.418 [2024-07-26 07:44:21.955884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.418 [2024-07-26 07:44:21.955894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.418 [2024-07-26 07:44:21.955903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.418 [2024-07-26 07:44:21.955912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bad40 is same with the state(5) to be set 00:18:56.418 [2024-07-26 07:44:21.956361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:56.418 [2024-07-26 07:44:21.956398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bad40 (9): Bad file descriptor 00:18:56.418 [2024-07-26 07:44:21.956717] uring.c: 663:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.418 [2024-07-26 07:44:21.956751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15bad40 with addr=10.0.0.2, port=4420 00:18:56.418 [2024-07-26 07:44:21.956763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bad40 is same with the state(5) to be set 00:18:56.418 [2024-07-26 07:44:21.956784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bad40 (9): Bad file descriptor 00:18:56.418 [2024-07-26 07:44:21.956801] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:56.418 [2024-07-26 07:44:21.956810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:56.418 [2024-07-26 07:44:21.956821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:56.418 [2024-07-26 07:44:21.957044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
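The repeated connect() failures above (errno = 111, ECONNREFUSED on Linux) and the "controller reinitialization failed" / "Resetting controller failed." pairs are the reconnect path retrying while the target side is unreachable, presumably because the test has already taken the listener away, as it does again later in this log. The shell trace that follows (host/timeout.sh@57 and @58) only verifies, over the bdevperf RPC socket, that the NVMe0 controller and the NVMe0n1 bdev are still registered while those retries continue. A minimal standalone sketch of the same kind of check, reusing the rpc.py path and socket from the trace (the retry loop itself is illustrative, not lifted from host/timeout.sh):

    # Poll bdevperf over its RPC socket until the attached controller is visible.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    for _ in $(seq 1 10); do
        name=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
        [[ "$name" == "NVMe0" ]] && break
        sleep 1
    done
    "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'   # expect NVMe0n1 to still be listed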
00:18:56.418 [2024-07-26 07:44:21.957069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:56.418 07:44:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:18:58.946 [2024-07-26 07:44:23.957248] uring.c: 663:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:58.946 [2024-07-26 07:44:23.957315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15bad40 with addr=10.0.0.2, port=4420 00:18:58.946 [2024-07-26 07:44:23.957333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bad40 is same with the state(5) to be set 00:18:58.946 [2024-07-26 07:44:23.957364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bad40 (9): Bad file descriptor 00:18:58.946 [2024-07-26 07:44:23.957384] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:58.946 [2024-07-26 07:44:23.957395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:58.946 [2024-07-26 07:44:23.957407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:58.946 [2024-07-26 07:44:23.957439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:58.946 [2024-07-26 07:44:23.957452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:58.946 07:44:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:18:58.946 07:44:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:58.946 07:44:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:58.946 07:44:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:58.946 07:44:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:18:58.946 07:44:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:58.947 07:44:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:58.947 07:44:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:58.947 07:44:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:00.851 [2024-07-26 07:44:25.957625] uring.c: 663:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:00.851 [2024-07-26 07:44:25.957692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15bad40 with addr=10.0.0.2, port=4420 00:19:00.851 [2024-07-26 07:44:25.957710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bad40 is same with the state(5) to be set 00:19:00.851 [2024-07-26 07:44:25.957737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bad40 (9): Bad file descriptor 00:19:00.851 [2024-07-26 07:44:25.957757] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:00.851 [2024-07-26 07:44:25.957767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:00.851 [2024-07-26 
07:44:25.957779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:00.851 [2024-07-26 07:44:25.957811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:00.851 [2024-07-26 07:44:25.957824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:02.751 [2024-07-26 07:44:27.957874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:02.751 [2024-07-26 07:44:27.957961] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:02.751 [2024-07-26 07:44:27.957991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:02.751 [2024-07-26 07:44:27.958004] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:02.751 [2024-07-26 07:44:27.958038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:03.683 00:19:03.683 Latency(us) 00:19:03.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.683 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:03.683 Verification LBA range: start 0x0 length 0x4000 00:19:03.683 NVMe0n1 : 8.14 1011.04 3.95 15.73 0.00 124721.85 3753.43 7046430.72 00:19:03.683 =================================================================================================================== 00:19:03.683 Total : 1011.04 3.95 15.73 0.00 124721.85 3753.43 7046430.72 00:19:03.683 0 00:19:03.942 07:44:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:03.942 07:44:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:03.942 07:44:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:04.200 07:44:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:04.200 07:44:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:04.200 07:44:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:04.200 07:44:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:04.459 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:04.459 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81565 00:19:04.459 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81547 00:19:04.459 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81547 ']' 00:19:04.459 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81547 00:19:04.459 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:19:04.459 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:04.459 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81547 00:19:04.459 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:04.459 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo 
']' 00:19:04.459 killing process with pid 81547 00:19:04.459 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81547' 00:19:04.459 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81547 00:19:04.459 Received shutdown signal, test time was about 9.212082 seconds 00:19:04.459 00:19:04.459 Latency(us) 00:19:04.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.459 =================================================================================================================== 00:19:04.459 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:04.459 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81547 00:19:05.025 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:05.025 [2024-07-26 07:44:30.516407] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.025 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81687 00:19:05.025 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:05.025 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81687 /var/tmp/bdevperf.sock 00:19:05.025 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81687 ']' 00:19:05.025 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.025 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:05.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:05.026 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:05.026 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:05.026 07:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:05.026 [2024-07-26 07:44:30.580898] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 
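The restart sequence above follows the usual bdevperf-over-RPC pattern: bdevperf is launched idle with -z against its own RPC socket (-r /var/tmp/bdevperf.sock), the harness waits for that socket (waitforlisten), the NVMe-oF controller is then attached via rpc.py, and only afterwards is I/O kicked off with bdevperf.py perform_tests (both of which appear in the next chunk of the trace). A condensed sketch of that sequence using the paths and flags shown above; the socket-wait loop is a stand-in for the harness's waitforlisten helper, not its actual code:

    # Start bdevperf idle (-z) on a private RPC socket, then drive it over JSON-RPC.
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock
    "$spdk"/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &
    until [ -S "$sock" ]; do sleep 0.1; done          # stand-in for waitforlisten
    # ... attach the controller via rpc.py (see the bdev_nvme_attach_controller call below) ...
    "$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests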
00:19:05.026 [2024-07-26 07:44:30.580989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81687 ] 00:19:05.284 [2024-07-26 07:44:30.717642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.284 [2024-07-26 07:44:30.836893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.562 [2024-07-26 07:44:30.911411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:06.129 07:44:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:06.129 07:44:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:19:06.129 07:44:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:06.387 07:44:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:06.645 NVMe0n1 00:19:06.645 07:44:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:06.645 07:44:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81710 00:19:06.645 07:44:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:06.645 Running I/O for 10 seconds... 
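The bdev_nvme_attach_controller call above is where the behaviour under test is configured: --reconnect-delay-sec 1 retries the connection roughly every second, --fast-io-fail-timeout-sec 2 starts failing queued I/O back after about two seconds of disconnection, and --ctrlr-loss-timeout-sec 5 gives up on the controller entirely after about five seconds. The nvmf_subsystem_remove_listener call that follows (host/timeout.sh@87, below) is what forces that path, and the burst of ABORTED - SQ DELETION completions after it is the queued verify I/O being completed as the qpair is torn down. For reference, the attach as a standalone command, copied from the trace above (same socket, target address and NQN as the test):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1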
00:19:07.578 07:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:07.840 [2024-07-26 07:44:33.281463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.840 [2024-07-26 07:44:33.281533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.840 [2024-07-26 07:44:33.281559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.840 [2024-07-26 07:44:33.281574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.840 [2024-07-26 07:44:33.281587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.840 [2024-07-26 07:44:33.281597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.840 [2024-07-26 07:44:33.281609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.840 [2024-07-26 07:44:33.281618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.840 [2024-07-26 07:44:33.281630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:67000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.840 [2024-07-26 07:44:33.281640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.840 [2024-07-26 07:44:33.281651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.840 [2024-07-26 07:44:33.281661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.840 [2024-07-26 07:44:33.281672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.840 [2024-07-26 07:44:33.281682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.840 [2024-07-26 07:44:33.281694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.840 [2024-07-26 07:44:33.281703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.840 [2024-07-26 07:44:33.281714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:67032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.840 [2024-07-26 07:44:33.281724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.840 [2024-07-26 07:44:33.281735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:67040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.840 
[2024-07-26 07:44:33.281745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.840 [2024-07-26 07:44:33.281756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:67048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.840 [2024-07-26 07:44:33.281766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.840 [2024-07-26 07:44:33.281777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.840 [2024-07-26 07:44:33.281787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.840 [2024-07-26 07:44:33.281799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:67064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.840 [2024-07-26 07:44:33.281809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.840 [2024-07-26 07:44:33.281820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.840 [2024-07-26 07:44:33.282050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.840 [2024-07-26 07:44:33.282330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.840 [2024-07-26 07:44:33.282344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.840 [2024-07-26 07:44:33.282356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:67088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.840 [2024-07-26 07:44:33.282367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.840 [2024-07-26 07:44:33.282378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.840 [2024-07-26 07:44:33.282388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.840 [2024-07-26 07:44:33.282401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.840 [2024-07-26 07:44:33.282411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.282422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.282431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.282443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.282452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.282577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.282594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.282608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:67136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.282621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.282632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.282642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.282780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:67152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.282931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.283042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.283055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.283067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.283077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.283088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.283098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.283109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.283119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.283130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:67192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.283139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.283150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.283160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.283531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.283557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.283572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:67216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.283582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.283595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:67224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.283605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.283617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.283627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.283639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.283648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.283660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.283669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.283680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.283690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.283987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.284012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.284025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.284035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.284046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.284056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:07.841 [2024-07-26 07:44:33.284068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.284079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.284090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:67296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.284100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.284111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:67304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.284120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.284132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.284141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.284282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.284299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.284311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.284321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.284561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.284584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.284597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.284608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.284620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.284630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.284642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:67360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.284652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.284664] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:67368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.284674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.284685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.284695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.284974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.285044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.285060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.285071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.285082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:67400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.285092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.285103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.841 [2024-07-26 07:44:33.285113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.841 [2024-07-26 07:44:33.285124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:67416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.285134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.285145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:67424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.285155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.285166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.285278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.285294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.285304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.285429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:103 nsid:1 lba:67448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.285606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.285720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.285732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.285744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.286007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.286032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:67472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.286044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.286056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.286066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.286078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:67488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.286088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.286100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.286110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.286121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.286445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.286481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:67512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.286494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.286507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.286516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.286528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67528 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.286538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.286549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.286559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.286570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.286580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.286722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.286863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.286977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.286996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.287009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.287122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.287143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.287154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.287303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.287402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.287418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.287428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.287440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.287450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.287462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 
[2024-07-26 07:44:33.287492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.287507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.287609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.287624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.287634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.287880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.287900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.287914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.287924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.287935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.287945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.287956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.287966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.287977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.287987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.287998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.288008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.288143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.288266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.288282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.288293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.288402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.288423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.288576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.288838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.288862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.288873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.842 [2024-07-26 07:44:33.289017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.842 [2024-07-26 07:44:33.289032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.289265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.289287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.289300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:66736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.289311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.289324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.289335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.289346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.289356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.289367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.289377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.289388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.289661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.289806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.289953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.290041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.290055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.290066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.290076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.290087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:66800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.290226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.290245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.290374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.290459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.290485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.290497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.290508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.290640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.290741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.290756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.290766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.290778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.843 [2024-07-26 07:44:33.290789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.290800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.843 [2024-07-26 07:44:33.290809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.290820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.290830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.290842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:66856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.291096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.291118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.291129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.291141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.291151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.291162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.291172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.291183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.291193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.291205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.291214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.291225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.843 [2024-07-26 07:44:33.291374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.291518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:66904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.291635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 
[2024-07-26 07:44:33.291652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.291662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.291939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:66920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.291959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.291971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.291981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.291993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.292003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.292014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.292024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.292036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.843 [2024-07-26 07:44:33.292045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.292056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9091b0 is same with the state(5) to be set 00:19:07.843 [2024-07-26 07:44:33.292388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:07.843 [2024-07-26 07:44:33.292400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:07.843 [2024-07-26 07:44:33.292410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66960 len:8 PRP1 0x0 PRP2 0x0 00:19:07.843 [2024-07-26 07:44:33.292420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.292592] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9091b0 was disconnected and freed. reset controller. 
00:19:07.843 [2024-07-26 07:44:33.292984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.843 [2024-07-26 07:44:33.293012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.293026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.843 [2024-07-26 07:44:33.293035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.293045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.843 [2024-07-26 07:44:33.293054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.843 [2024-07-26 07:44:33.293065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.844 [2024-07-26 07:44:33.293074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.844 [2024-07-26 07:44:33.293083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x898d40 is same with the state(5) to be set 00:19:07.844 [2024-07-26 07:44:33.293610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:07.844 [2024-07-26 07:44:33.293648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x898d40 (9): Bad file descriptor 00:19:07.844 [2024-07-26 07:44:33.293955] uring.c: 663:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.844 [2024-07-26 07:44:33.293988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x898d40 with addr=10.0.0.2, port=4420 00:19:07.844 [2024-07-26 07:44:33.294001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x898d40 is same with the state(5) to be set 00:19:07.844 [2024-07-26 07:44:33.294021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x898d40 (9): Bad file descriptor 00:19:07.844 [2024-07-26 07:44:33.294039] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:07.844 [2024-07-26 07:44:33.294286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:07.844 [2024-07-26 07:44:33.294312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:07.844 [2024-07-26 07:44:33.294336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:07.844 [2024-07-26 07:44:33.294349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:07.844 07:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:19:08.777 [2024-07-26 07:44:34.294499] uring.c: 663:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:08.777 [2024-07-26 07:44:34.294563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x898d40 with addr=10.0.0.2, port=4420
00:19:08.777 [2024-07-26 07:44:34.294580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x898d40 is same with the state(5) to be set
00:19:08.777 [2024-07-26 07:44:34.294604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x898d40 (9): Bad file descriptor
00:19:08.777 [2024-07-26 07:44:34.294625] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:08.777 [2024-07-26 07:44:34.294636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:08.777 [2024-07-26 07:44:34.294649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:08.777 [2024-07-26 07:44:34.294682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:08.777 [2024-07-26 07:44:34.294695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:08.777 07:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:09.035 [2024-07-26 07:44:34.556982] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:09.035 07:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81710
00:19:09.969 [2024-07-26 07:44:35.312751] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:18.128
00:19:18.128                       Latency(us)
00:19:18.128 Device Information    : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average   min      max
00:19:18.128 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:18.129   Verification LBA range: start 0x0 length 0x4000
00:19:18.129   NVMe0n1             : 10.01       6642.81   25.95   0.00    0.00  19240.64  1802.24  3035150.89
00:19:18.129 ===================================================================================================================
00:19:18.129 Total                 :             6642.81   25.95   0.00    0.00  19240.64  1802.24  3035150.89
00:19:18.129 0
00:19:18.128 07:44:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81815
00:19:18.129 07:44:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:18.129 07:44:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:19:18.129 Running I/O for 10 seconds...
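The shell trace above re-adds the TCP listener with scripts/rpc.py, after which the pending controller reset finally succeeds and the verify results are printed; the trace just below removes the same listener again for the next 10-second run. A rough stand-alone sketch of that listener toggle follows (the rpc.py path, NQN, address and port are copied from the trace above; the sequencing and the pause are assumptions, not the actual timeout.sh logic):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # Drop the TCP listener: host-side reconnect attempts then fail with
    # connect() errno = 111 (ECONNREFUSED on Linux), as in the
    # uring_sock_create errors above, and bdev_nvme keeps retrying the reset.
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    # Assumed pause: lets a reset cycle fail ("Resetting controller failed.").
    sleep 1
    # Re-add the listener: the next reconnect succeeds and the reset completes
    # ("Resetting controller successful.").
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420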
00:19:18.129 07:44:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:18.129 [2024-07-26 07:44:43.433787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.129 [2024-07-26 07:44:43.433871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.433912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.433924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.433936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.433946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.433958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.433968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.433979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.433989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.434000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.434009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.434021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.434030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.434041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.434050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.434061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.434070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.434081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:18.129 [2024-07-26 07:44:43.434090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.434101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.434110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.434121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.434130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.434158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.434302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.434316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.434327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.434338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.434348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.434359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.434369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.434649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.434677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.434693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.434703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.434715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.434725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.434736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.434747] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.434758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.434768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.434779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.434788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.434800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.434809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.435187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.435213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.435227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.435238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.435250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.435260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.435271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.435281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.435293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.435303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.435314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.435324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.435336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.435345] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.435701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.435726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.435740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.435750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.435762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.435773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.435785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.435795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.435806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.129 [2024-07-26 07:44:43.435816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.129 [2024-07-26 07:44:43.435828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.435838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.435849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.435858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.436158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.436181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.436194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.436204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.436216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.436225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.436237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.436247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.436258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.436268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.436279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.436288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.436300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.436309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.436573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.436596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.436805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.436821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.436833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.436843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.436854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.436863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.436875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.436997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.437015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.437152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 
[2024-07-26 07:44:43.437391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.437417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.437430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.437440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.437452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.437461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.437486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.437497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.437509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.437518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.437529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.437538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.437550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.437803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.437943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.438053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.438074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.438086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.438200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.438220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.438233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.438346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.438368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.438379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.438601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.438627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.438640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.438650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.438662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.438672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.438684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.438694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.438705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.438715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.438726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.438735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.439059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.439080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.439092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.439103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.439114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.439123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.439135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.439144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.439156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.439166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.439177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.439187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.130 [2024-07-26 07:44:43.439545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.130 [2024-07-26 07:44:43.439558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.439570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.439580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.439591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.439601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.439612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.439622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.439634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.439643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.439987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.439999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.440011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66456 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.440021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.440033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.440043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.440055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.440065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.440077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.440086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.440415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.440429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.440442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.440452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.440463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.440485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.440607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.440620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.440756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.440778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.441056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.441184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.441205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 
07:44:43.441305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.441323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.441333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.441344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.441354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.441483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.441499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.441583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.441598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.441609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.441620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.441631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.441641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.441749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.441769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.441782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.441793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.442070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.442212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.442298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.442311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.442323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.442333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.442345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.442354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.442496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.442514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.442527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.442670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.442759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.442771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.442783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.442793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.442806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.442816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.442827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.442837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.442848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.442858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.442983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.131 [2024-07-26 07:44:43.443099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.443113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.131 [2024-07-26 07:44:43.443123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.443272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.131 [2024-07-26 07:44:43.443376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.443391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.131 [2024-07-26 07:44:43.443401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.131 [2024-07-26 07:44:43.443413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.132 [2024-07-26 07:44:43.443423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.132 [2024-07-26 07:44:43.443434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.132 [2024-07-26 07:44:43.443443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.132 [2024-07-26 07:44:43.443454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.132 [2024-07-26 07:44:43.443464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.132 [2024-07-26 07:44:43.443583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.132 [2024-07-26 07:44:43.443595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.132 [2024-07-26 07:44:43.443606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.132 [2024-07-26 07:44:43.443615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.132 [2024-07-26 07:44:43.443730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.132 [2024-07-26 07:44:43.443743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.132 [2024-07-26 07:44:43.443755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.132 [2024-07-26 07:44:43.443877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:18.132 [2024-07-26 07:44:43.443895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.132 [2024-07-26 07:44:43.443905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.132 [2024-07-26 07:44:43.443917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.132 [2024-07-26 07:44:43.444051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.132 [2024-07-26 07:44:43.444154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.132 [2024-07-26 07:44:43.444167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.132 [2024-07-26 07:44:43.444178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.132 [2024-07-26 07:44:43.444188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.132 [2024-07-26 07:44:43.444199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.132 [2024-07-26 07:44:43.444209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.132 [2024-07-26 07:44:43.444349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.132 [2024-07-26 07:44:43.444463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.132 [2024-07-26 07:44:43.444491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x907fb0 is same with the state(5) to be set 00:19:18.132 [2024-07-26 07:44:43.444628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.132 [2024-07-26 07:44:43.444713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.132 [2024-07-26 07:44:43.444726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66712 len:8 PRP1 0x0 PRP2 0x0 00:19:18.132 [2024-07-26 07:44:43.444736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.132 [2024-07-26 07:44:43.444803] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x907fb0 was disconnected and freed. reset controller. 
00:19:18.132 [2024-07-26 07:44:43.445221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.132 [2024-07-26 07:44:43.445259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.132 [2024-07-26 07:44:43.445272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.132 [2024-07-26 07:44:43.445282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.132 [2024-07-26 07:44:43.445293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.132 [2024-07-26 07:44:43.445302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.132 [2024-07-26 07:44:43.445312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.132 [2024-07-26 07:44:43.445322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.132 [2024-07-26 07:44:43.445332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x898d40 is same with the state(5) to be set 00:19:18.132 [2024-07-26 07:44:43.445799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:18.132 [2024-07-26 07:44:43.445835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x898d40 (9): Bad file descriptor 00:19:18.132 [2024-07-26 07:44:43.445939] uring.c: 663:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.132 [2024-07-26 07:44:43.446072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x898d40 with addr=10.0.0.2, port=4420 00:19:18.132 [2024-07-26 07:44:43.446171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x898d40 is same with the state(5) to be set 00:19:18.132 [2024-07-26 07:44:43.446198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x898d40 (9): Bad file descriptor 00:19:18.132 [2024-07-26 07:44:43.446215] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:18.132 [2024-07-26 07:44:43.446226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:18.132 [2024-07-26 07:44:43.446354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:18.132 [2024-07-26 07:44:43.446594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:18.132 [2024-07-26 07:44:43.446623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:18.132 07:44:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:19.064 [2024-07-26 07:44:44.446772] uring.c: 663:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:19.064 [2024-07-26 07:44:44.446848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x898d40 with addr=10.0.0.2, port=4420 00:19:19.064 [2024-07-26 07:44:44.446866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x898d40 is same with the state(5) to be set 00:19:19.064 [2024-07-26 07:44:44.446894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x898d40 (9): Bad file descriptor 00:19:19.064 [2024-07-26 07:44:44.446915] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:19.064 [2024-07-26 07:44:44.446925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:19.064 [2024-07-26 07:44:44.446937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:19.064 [2024-07-26 07:44:44.446967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:19.064 [2024-07-26 07:44:44.446980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:19.997 [2024-07-26 07:44:45.447104] uring.c: 663:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:19.997 [2024-07-26 07:44:45.447185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x898d40 with addr=10.0.0.2, port=4420 00:19:19.997 [2024-07-26 07:44:45.447203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x898d40 is same with the state(5) to be set 00:19:19.997 [2024-07-26 07:44:45.447225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x898d40 (9): Bad file descriptor 00:19:19.997 [2024-07-26 07:44:45.447245] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:19.998 [2024-07-26 07:44:45.447255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:19.998 [2024-07-26 07:44:45.447266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:19.998 [2024-07-26 07:44:45.447294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:19.998 [2024-07-26 07:44:45.447306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:20.930 [2024-07-26 07:44:46.450909] uring.c: 663:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:20.930 [2024-07-26 07:44:46.450999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x898d40 with addr=10.0.0.2, port=4420 00:19:20.930 [2024-07-26 07:44:46.451017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x898d40 is same with the state(5) to be set 00:19:20.930 [2024-07-26 07:44:46.451449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x898d40 (9): Bad file descriptor 00:19:20.930 [2024-07-26 07:44:46.451881] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:20.930 [2024-07-26 07:44:46.451908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:20.930 [2024-07-26 07:44:46.451921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:20.930 [2024-07-26 07:44:46.455955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:20.930 [2024-07-26 07:44:46.456005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:20.930 07:44:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:21.187 [2024-07-26 07:44:46.710071] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.187 07:44:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 81815 00:19:22.118 [2024-07-26 07:44:47.492067] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:27.379 00:19:27.379 Latency(us) 00:19:27.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.379 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:27.379 Verification LBA range: start 0x0 length 0x4000 00:19:27.379 NVMe0n1 : 10.01 5671.38 22.15 3851.40 0.00 13402.65 647.91 3019898.88 00:19:27.379 =================================================================================================================== 00:19:27.379 Total : 5671.38 22.15 3851.40 0.00 13402.65 0.00 3019898.88 00:19:27.379 0 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81687 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81687 ']' 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81687 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81687 00:19:27.379 killing process with pid 81687 00:19:27.379 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.379 00:19:27.379 Latency(us) 00:19:27.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.379 =================================================================================================================== 00:19:27.379 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81687' 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81687 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81687 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81935 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81935 /var/tmp/bdevperf.sock 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81935 ']' 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:27.379 07:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:27.379 [2024-07-26 07:44:52.720610] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:19:27.379 [2024-07-26 07:44:52.720706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81935 ] 00:19:27.379 [2024-07-26 07:44:52.858753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.638 [2024-07-26 07:44:52.985790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.638 [2024-07-26 07:44:53.058595] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:28.203 07:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.203 07:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:19:28.203 07:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81950 00:19:28.203 07:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81935 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:28.203 07:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:28.461 07:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:28.719 NVMe0n1 00:19:28.719 07:44:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=81987 00:19:28.719 07:44:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:28.719 07:44:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:28.976 Running I/O for 10 seconds... 
00:19:29.909 07:44:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:29.909 [2024-07-26 07:44:55.477795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.477887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.477931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.477943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.477956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.477966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.477978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.477988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.478000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.478009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.478021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.478031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.478042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.478052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.478063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.478073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.478084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.478094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.478105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 
[2024-07-26 07:44:55.478115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.478127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.478136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.478148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.478158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.478635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.478659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.478673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.478684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.478696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.478706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.478718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.478727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.478739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.478755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.478769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.478779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.478790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.478801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.478812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.479218] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.479233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.479244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.479255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.479266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.479278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.479287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.479299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.479308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.479320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:124016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.479329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.479341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.479350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.479707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.479733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.479746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.479756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.479768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.479777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.479789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.479799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.479810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.479819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.479831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:35400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.479840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.479852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.479862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.479874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.479883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.479895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.479905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.479916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.479926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.909 [2024-07-26 07:44:55.479937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.909 [2024-07-26 07:44:55.479947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.479959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:52864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.479969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.479980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.479990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 
[2024-07-26 07:44:55.480460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:68520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:35680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.910 [2024-07-26 07:44:55.480789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.910 [2024-07-26 07:44:55.480801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.480811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.480822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.480832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.480843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:28112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.480853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.480864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.480874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.480885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.480895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.480906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:80 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.480916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.480928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.480938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.480950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.480959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.480971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.480980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.481571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.481594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.481615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.481636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.481657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.481679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22920 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.481700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:116896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.481721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:54064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.481743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:27576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.481764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.481785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.481806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.481827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.481862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.481883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.481913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:29.911 [2024-07-26 07:44:55.481936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.481957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.481979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.481991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.482000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.482012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.482022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.482034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:117904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.482043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.482055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.482065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.482076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.482086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.482097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.482107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.482118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.482129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.482140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 
07:44:55.482150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.482162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.482172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.482184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.482195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.911 [2024-07-26 07:44:55.482212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.911 [2024-07-26 07:44:55.482222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.482234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.912 [2024-07-26 07:44:55.482244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.482256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.912 [2024-07-26 07:44:55.482271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.482283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.912 [2024-07-26 07:44:55.482293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.482305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.912 [2024-07-26 07:44:55.482315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.482327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.912 [2024-07-26 07:44:55.482337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.482348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.912 [2024-07-26 07:44:55.482358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.482369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.912 [2024-07-26 07:44:55.482379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.482391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.912 [2024-07-26 07:44:55.482401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.482412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.912 [2024-07-26 07:44:55.482423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.482434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.912 [2024-07-26 07:44:55.482444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.482456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:36184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.912 [2024-07-26 07:44:55.482854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.483002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.912 [2024-07-26 07:44:55.483331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.483781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.912 [2024-07-26 07:44:55.484223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.484650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e6a0 is same with the state(5) to be set 00:19:29.912 [2024-07-26 07:44:55.484861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:29.912 [2024-07-26 07:44:55.484945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:29.912 [2024-07-26 07:44:55.484958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112568 len:8 PRP1 0x0 PRP2 0x0 00:19:29.912 [2024-07-26 07:44:55.484976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.485047] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x217e6a0 was disconnected and freed. reset controller. 
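A note on the long dump above: as the I/O qpair is torn down, every READ still queued on qpair 1 is completed with ABORTED - SQ DELETION (status 00/08, i.e. the generic "Command Aborted due to SQ Deletion" status from the NVMe spec) before bdev_nvme frees the qpair and resets the controller. If this console output is saved to a file, the aborted commands can be tallied with a one-liner along these lines (the file name here is only a placeholder, not part of the captured run):

  # count how many queued reads were aborted during the qpair teardown
  grep -o 'ABORTED - SQ DELETION (00/08)' nvmf_timeout_console.log | wc -l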
00:19:29.912 [2024-07-26 07:44:55.485160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.912 [2024-07-26 07:44:55.485177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.485190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.912 [2024-07-26 07:44:55.485200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.485210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.912 [2024-07-26 07:44:55.485219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.485230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.912 [2024-07-26 07:44:55.485252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.912 [2024-07-26 07:44:55.485262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212dc00 is same with the state(5) to be set 00:19:29.912 [2024-07-26 07:44:55.485529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:29.912 [2024-07-26 07:44:55.485556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212dc00 (9): Bad file descriptor 00:19:29.912 [2024-07-26 07:44:55.485668] uring.c: 663:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.912 [2024-07-26 07:44:55.485690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x212dc00 with addr=10.0.0.2, port=4420 00:19:29.912 [2024-07-26 07:44:55.485702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212dc00 is same with the state(5) to be set 00:19:29.912 [2024-07-26 07:44:55.485720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212dc00 (9): Bad file descriptor 00:19:29.912 [2024-07-26 07:44:55.485738] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:29.912 [2024-07-26 07:44:55.485748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:29.912 [2024-07-26 07:44:55.485759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:29.912 [2024-07-26 07:44:55.485780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
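The errno = 111 reported by uring_sock_create() above is ECONNREFUSED: the TCP connect() to 10.0.0.2:4420 is being refused while the controller is down, and the entries that follow show the same refusal repeating on roughly a two-second cadence (07:44:55, :57, :59) until the controller is left in the failed state. A quick way to confirm the errno mapping outside of this run (Linux values assumed):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused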
00:19:29.912 [2024-07-26 07:44:55.485790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:29.912 07:44:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 81987 00:19:32.451 [2024-07-26 07:44:57.486109] uring.c: 663:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:32.451 [2024-07-26 07:44:57.486186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x212dc00 with addr=10.0.0.2, port=4420 00:19:32.451 [2024-07-26 07:44:57.486204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212dc00 is same with the state(5) to be set 00:19:32.451 [2024-07-26 07:44:57.486232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212dc00 (9): Bad file descriptor 00:19:32.451 [2024-07-26 07:44:57.486252] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:32.451 [2024-07-26 07:44:57.486263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:32.451 [2024-07-26 07:44:57.486275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:32.451 [2024-07-26 07:44:57.486305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:32.451 [2024-07-26 07:44:57.486318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:34.350 [2024-07-26 07:44:59.486594] uring.c: 663:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:34.350 [2024-07-26 07:44:59.486673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x212dc00 with addr=10.0.0.2, port=4420 00:19:34.350 [2024-07-26 07:44:59.486691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212dc00 is same with the state(5) to be set 00:19:34.350 [2024-07-26 07:44:59.486720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212dc00 (9): Bad file descriptor 00:19:34.350 [2024-07-26 07:44:59.486742] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:34.350 [2024-07-26 07:44:59.486753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:34.350 [2024-07-26 07:44:59.486765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:34.350 [2024-07-26 07:44:59.486795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:34.350 [2024-07-26 07:44:59.486809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:36.250 [2024-07-26 07:45:01.486912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:36.250 [2024-07-26 07:45:01.486964] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:36.250 [2024-07-26 07:45:01.486994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:36.250 [2024-07-26 07:45:01.487005] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:36.250 [2024-07-26 07:45:01.487035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:37.183 00:19:37.183 Latency(us) 00:19:37.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.183 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:37.183 NVMe0n1 : 8.15 2213.65 8.65 15.71 0.00 57337.90 7506.85 7046430.72 00:19:37.183 =================================================================================================================== 00:19:37.183 Total : 2213.65 8.65 15.71 0.00 57337.90 7506.85 7046430.72 00:19:37.183 0 00:19:37.183 07:45:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:37.183 Attaching 5 probes... 00:19:37.183 1280.629255: reset bdev controller NVMe0 00:19:37.183 1280.704766: reconnect bdev controller NVMe0 00:19:37.183 3281.064419: reconnect delay bdev controller NVMe0 00:19:37.183 3281.103250: reconnect bdev controller NVMe0 00:19:37.183 5281.520496: reconnect delay bdev controller NVMe0 00:19:37.183 5281.558946: reconnect bdev controller NVMe0 00:19:37.183 7281.977235: reconnect delay bdev controller NVMe0 00:19:37.183 7282.015677: reconnect bdev controller NVMe0 00:19:37.183 07:45:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:37.183 07:45:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:37.183 07:45:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 81950 00:19:37.183 07:45:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:37.183 07:45:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81935 00:19:37.183 07:45:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81935 ']' 00:19:37.183 07:45:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81935 00:19:37.183 07:45:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:19:37.183 07:45:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:37.183 07:45:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81935 00:19:37.183 killing process with pid 81935 00:19:37.183 Received shutdown signal, test time was about 8.207355 seconds 00:19:37.183 00:19:37.183 Latency(us) 00:19:37.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.183 =================================================================================================================== 00:19:37.183 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:37.183 07:45:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:37.183 07:45:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:37.183 07:45:02 
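The pass/fail decision for this timeout case comes from the bdevperf trace dumped above: host/timeout.sh counts the 'reconnect delay bdev controller NVMe0' probes and, as the (( 3 <= 2 )) guard suggests, treats two or fewer delayed reconnects as a failure. Reduced to a sketch (the trace path and probe text are taken from this run; the variable name is illustrative, not the script's own):

  delays=$(grep -c 'reconnect delay bdev controller NVMe0' /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt)
  (( delays <= 2 )) && exit 1   # this run recorded 3 delayed reconnects, so the check passes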
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81935' 00:19:37.183 07:45:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81935 00:19:37.183 07:45:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81935 00:19:37.441 07:45:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:37.700 rmmod nvme_tcp 00:19:37.700 rmmod nvme_fabrics 00:19:37.700 rmmod nvme_keyring 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 81498 ']' 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 81498 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81498 ']' 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81498 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81498 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:37.700 killing process with pid 81498 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81498' 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81498 00:19:37.700 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81498 00:19:37.957 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:37.957 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:37.957 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:37.957 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:37.957 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:37.957 07:45:03 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.957 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:37.957 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.957 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:38.215 ************************************ 00:19:38.215 END TEST nvmf_timeout 00:19:38.215 ************************************ 00:19:38.215 00:19:38.215 real 0m47.185s 00:19:38.215 user 2m17.985s 00:19:38.215 sys 0m5.881s 00:19:38.215 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:38.215 07:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:38.215 07:45:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:19:38.215 07:45:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:38.215 00:19:38.215 real 5m5.998s 00:19:38.215 user 13m19.802s 00:19:38.215 sys 1m9.561s 00:19:38.215 07:45:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:38.215 07:45:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.215 ************************************ 00:19:38.215 END TEST nvmf_host 00:19:38.215 ************************************ 00:19:38.215 00:19:38.215 real 12m7.587s 00:19:38.215 user 29m31.574s 00:19:38.215 sys 3m0.817s 00:19:38.215 07:45:03 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:38.215 07:45:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:38.215 ************************************ 00:19:38.215 END TEST nvmf_tcp 00:19:38.215 ************************************ 00:19:38.215 07:45:03 -- spdk/autotest.sh@292 -- # [[ 1 -eq 0 ]] 00:19:38.215 07:45:03 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:38.215 07:45:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:38.215 07:45:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:38.215 07:45:03 -- common/autotest_common.sh@10 -- # set +x 00:19:38.215 ************************************ 00:19:38.215 START TEST nvmf_dif 00:19:38.215 ************************************ 00:19:38.215 07:45:03 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:38.215 * Looking for test storage... 
00:19:38.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:38.215 07:45:03 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:437e2608-a818-4ddb-8068-388d756b599a 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=437e2608-a818-4ddb-8068-388d756b599a 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:38.215 07:45:03 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.215 07:45:03 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.215 07:45:03 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.215 07:45:03 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.215 07:45:03 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.215 07:45:03 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.215 07:45:03 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:19:38.215 07:45:03 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:38.215 07:45:03 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:19:38.215 07:45:03 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:38.215 07:45:03 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:38.215 07:45:03 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:19:38.215 07:45:03 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:38.215 07:45:03 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.216 07:45:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:38.216 07:45:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:38.216 07:45:03 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:38.216 07:45:03 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:38.473 Cannot find device "nvmf_tgt_br" 00:19:38.473 07:45:03 nvmf_dif -- nvmf/common.sh@155 -- # true 00:19:38.473 07:45:03 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:38.473 Cannot find device "nvmf_tgt_br2" 00:19:38.473 07:45:03 nvmf_dif -- nvmf/common.sh@156 -- # true 00:19:38.473 07:45:03 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:38.473 07:45:03 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:38.473 Cannot find device "nvmf_tgt_br" 00:19:38.473 07:45:03 nvmf_dif -- nvmf/common.sh@158 -- # true 00:19:38.473 07:45:03 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:38.473 Cannot find device "nvmf_tgt_br2" 00:19:38.473 07:45:03 nvmf_dif -- nvmf/common.sh@159 -- # true 00:19:38.473 07:45:03 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:38.473 07:45:03 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:38.473 07:45:03 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:38.473 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:38.473 07:45:03 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:38.473 07:45:03 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:38.473 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:38.473 07:45:03 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:38.473 07:45:03 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:38.473 07:45:03 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:38.473 07:45:03 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:38.474 07:45:03 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:38.474 07:45:03 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:38.474 07:45:03 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:38.474 07:45:03 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:38.474 07:45:03 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:38.474 07:45:03 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:38.474 07:45:04 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:38.474 07:45:04 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:38.474 07:45:04 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:38.474 07:45:04 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:38.474 07:45:04 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:38.474 07:45:04 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:38.474 07:45:04 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:38.474 
07:45:04 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:38.474 07:45:04 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:38.474 07:45:04 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:38.474 07:45:04 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:38.474 07:45:04 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:38.732 07:45:04 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:38.732 07:45:04 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:38.732 07:45:04 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:38.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:38.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:19:38.732 00:19:38.732 --- 10.0.0.2 ping statistics --- 00:19:38.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.732 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:19:38.732 07:45:04 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:38.732 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:38.732 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:19:38.732 00:19:38.732 --- 10.0.0.3 ping statistics --- 00:19:38.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.732 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:19:38.732 07:45:04 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:38.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:38.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:19:38.732 00:19:38.732 --- 10.0.0.1 ping statistics --- 00:19:38.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.732 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:38.732 07:45:04 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.732 07:45:04 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:19:38.732 07:45:04 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:19:38.732 07:45:04 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:38.990 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:38.990 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:38.990 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:38.990 07:45:04 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.990 07:45:04 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:38.990 07:45:04 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:38.990 07:45:04 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.990 07:45:04 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:38.990 07:45:04 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:38.990 07:45:04 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:38.990 07:45:04 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:38.990 07:45:04 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:38.990 07:45:04 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:38.990 07:45:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:38.990 07:45:04 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=82423 00:19:38.990 
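Condensed, the veth/namespace plumbing that nvmf_veth_init ran above (and that the three successful pings verify) looks like the sketch below; the names and addresses are exactly the ones in the trace, while the link-up steps, the iptables ACCEPT rules and error handling are left out for brevity:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, gets 10.0.0.1/24
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side, gets 10.0.0.2/24
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target side, gets 10.0.0.3/24
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # the target then runs inside the namespace, as the next lines show:
  # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF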
07:45:04 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 82423 00:19:38.990 07:45:04 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 82423 ']' 00:19:38.990 07:45:04 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:38.990 07:45:04 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.990 07:45:04 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.990 07:45:04 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.990 07:45:04 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.990 07:45:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:38.990 [2024-07-26 07:45:04.565763] Starting SPDK v24.09-pre git sha1 5c22a76d6 / DPDK 24.03.0 initialization... 00:19:38.990 [2024-07-26 07:45:04.565884] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.248 [2024-07-26 07:45:04.702972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.248 [2024-07-26 07:45:04.816975] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.248 [2024-07-26 07:45:04.817048] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.248 [2024-07-26 07:45:04.817075] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.248 [2024-07-26 07:45:04.817083] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.248 [2024-07-26 07:45:04.817090] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:39.248 [2024-07-26 07:45:04.817118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.506 [2024-07-26 07:45:04.892063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:40.072 07:45:05 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:40.072 07:45:05 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:19:40.072 07:45:05 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:40.072 07:45:05 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:40.072 07:45:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:40.072 07:45:05 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.072 07:45:05 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:19:40.072 07:45:05 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:40.072 07:45:05 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.072 07:45:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:40.072 [2024-07-26 07:45:05.602294] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.072 07:45:05 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.072 07:45:05 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:40.072 07:45:05 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:40.072 07:45:05 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:40.072 07:45:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:40.072 ************************************ 00:19:40.072 START TEST fio_dif_1_default 00:19:40.072 ************************************ 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:40.072 bdev_null0 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.072 07:45:05 
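Stripped of the xtrace noise, the target-side setup for fio_dif_1_default (this line and the next) is a handful of RPCs; rpc_cmd is the test helper that wraps /home/vagrant/spdk_repo/spdk/scripts/rpc.py, so the equivalent sequence is roughly:

  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # 64 MB null bdev, 512-byte blocks + 16-byte metadata, DIF type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

fio then reaches this subsystem through the spdk_bdev ioengine, using the bdev_nvme_attach_controller JSON config that is printed a few lines further down.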
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:40.072 [2024-07-26 07:45:05.646401] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:19:40.072 07:45:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:19:40.073 07:45:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.073 07:45:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.073 { 00:19:40.073 "params": { 00:19:40.073 "name": "Nvme$subsystem", 00:19:40.073 "trtype": "$TEST_TRANSPORT", 00:19:40.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.073 "adrfam": "ipv4", 00:19:40.073 "trsvcid": "$NVMF_PORT", 00:19:40.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.073 "hdgst": ${hdgst:-false}, 00:19:40.073 "ddgst": ${ddgst:-false} 00:19:40.073 }, 00:19:40.073 "method": "bdev_nvme_attach_controller" 00:19:40.073 } 00:19:40.073 EOF 00:19:40.073 )") 00:19:40.073 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:40.073 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:19:40.073 07:45:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:19:40.073 07:45:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:19:40.073 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:40.073 07:45:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:19:40.073 07:45:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:19:40.073 07:45:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:19:40.073 07:45:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:40.073 "params": { 00:19:40.073 "name": "Nvme0", 00:19:40.073 "trtype": "tcp", 00:19:40.073 "traddr": "10.0.0.2", 00:19:40.073 "adrfam": "ipv4", 00:19:40.073 "trsvcid": "4420", 00:19:40.073 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:40.073 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:40.073 "hdgst": false, 00:19:40.073 "ddgst": false 00:19:40.073 }, 00:19:40.073 "method": "bdev_nvme_attach_controller" 00:19:40.073 }' 00:19:40.332 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:40.332 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:40.332 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:40.332 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:40.332 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:40.332 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:40.332 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:40.332 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:40.332 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:40.332 07:45:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:40.332 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:40.332 fio-3.35 00:19:40.332 Starting 1 thread 00:19:52.532 00:19:52.532 filename0: (groupid=0, jobs=1): err= 0: pid=82491: Fri Jul 26 07:45:16 2024 00:19:52.532 read: IOPS=9061, BW=35.4MiB/s (37.1MB/s)(354MiB/10001msec) 00:19:52.532 slat (nsec): min=6511, max=51378, avg=8176.75, stdev=3071.70 00:19:52.532 clat (usec): min=353, max=3275, avg=417.48, stdev=38.31 00:19:52.532 lat (usec): min=360, max=3303, avg=425.66, stdev=38.91 00:19:52.532 clat percentiles (usec): 00:19:52.533 | 1.00th=[ 359], 5.00th=[ 371], 10.00th=[ 379], 20.00th=[ 388], 00:19:52.533 | 30.00th=[ 400], 40.00th=[ 408], 50.00th=[ 416], 60.00th=[ 424], 00:19:52.533 | 70.00th=[ 433], 80.00th=[ 445], 90.00th=[ 461], 95.00th=[ 474], 00:19:52.533 | 99.00th=[ 498], 99.50th=[ 506], 99.90th=[ 537], 99.95th=[ 562], 00:19:52.533 | 99.99th=[ 750] 00:19:52.533 bw ( KiB/s): min=35161, max=37504, per=99.99%, avg=36242.16, stdev=611.52, samples=19 00:19:52.533 iops : min= 8790, max= 9376, avg=9060.95, stdev=152.83, samples=19 00:19:52.533 lat (usec) : 
500=99.21%, 750=0.78%, 1000=0.01% 00:19:52.533 lat (msec) : 2=0.01%, 4=0.01% 00:19:52.533 cpu : usr=84.52%, sys=13.69%, ctx=15, majf=0, minf=0 00:19:52.533 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.533 issued rwts: total=90624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.533 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:52.533 00:19:52.533 Run status group 0 (all jobs): 00:19:52.533 READ: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=354MiB (371MB), run=10001-10001msec 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.533 00:19:52.533 real 0m11.105s 00:19:52.533 user 0m9.163s 00:19:52.533 sys 0m1.682s 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:52.533 ************************************ 00:19:52.533 END TEST fio_dif_1_default 00:19:52.533 ************************************ 00:19:52.533 07:45:16 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:52.533 07:45:16 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:52.533 07:45:16 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:52.533 07:45:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:52.533 ************************************ 00:19:52.533 START TEST fio_dif_1_multi_subsystems 00:19:52.533 ************************************ 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 
-- # create_subsystem 0 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.533 bdev_null0 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.533 [2024-07-26 07:45:16.810476] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.533 bdev_null1 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.533 07:45:16 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:52.533 { 00:19:52.533 "params": { 00:19:52.533 "name": "Nvme$subsystem", 00:19:52.533 "trtype": "$TEST_TRANSPORT", 00:19:52.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:52.533 "adrfam": "ipv4", 00:19:52.533 "trsvcid": "$NVMF_PORT", 00:19:52.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:52.533 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:19:52.533 "hdgst": ${hdgst:-false}, 00:19:52.533 "ddgst": ${ddgst:-false} 00:19:52.533 }, 00:19:52.533 "method": "bdev_nvme_attach_controller" 00:19:52.533 } 00:19:52.533 EOF 00:19:52.533 )") 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.533 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:52.534 { 00:19:52.534 "params": { 00:19:52.534 "name": "Nvme$subsystem", 00:19:52.534 "trtype": "$TEST_TRANSPORT", 00:19:52.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:52.534 "adrfam": "ipv4", 00:19:52.534 "trsvcid": "$NVMF_PORT", 00:19:52.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:52.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:52.534 "hdgst": ${hdgst:-false}, 00:19:52.534 "ddgst": ${ddgst:-false} 00:19:52.534 }, 00:19:52.534 "method": "bdev_nvme_attach_controller" 00:19:52.534 } 00:19:52.534 EOF 00:19:52.534 )") 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:52.534 "params": { 00:19:52.534 "name": "Nvme0", 00:19:52.534 "trtype": "tcp", 00:19:52.534 "traddr": "10.0.0.2", 00:19:52.534 "adrfam": "ipv4", 00:19:52.534 "trsvcid": "4420", 00:19:52.534 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:52.534 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:52.534 "hdgst": false, 00:19:52.534 "ddgst": false 00:19:52.534 }, 00:19:52.534 "method": "bdev_nvme_attach_controller" 00:19:52.534 },{ 00:19:52.534 "params": { 00:19:52.534 "name": "Nvme1", 00:19:52.534 "trtype": "tcp", 00:19:52.534 "traddr": "10.0.0.2", 00:19:52.534 "adrfam": "ipv4", 00:19:52.534 "trsvcid": "4420", 00:19:52.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.534 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:52.534 "hdgst": false, 00:19:52.534 "ddgst": false 00:19:52.534 }, 00:19:52.534 "method": "bdev_nvme_attach_controller" 00:19:52.534 }' 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:52.534 07:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:52.534 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:52.534 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:52.534 fio-3.35 00:19:52.534 Starting 2 threads 00:20:02.551 00:20:02.551 filename0: (groupid=0, jobs=1): err= 0: pid=82650: Fri Jul 26 07:45:27 2024 00:20:02.551 read: IOPS=4905, BW=19.2MiB/s (20.1MB/s)(192MiB/10001msec) 00:20:02.551 slat (nsec): min=6680, max=57590, avg=13248.19, stdev=4139.39 00:20:02.551 clat (usec): min=371, max=4078, avg=778.95, stdev=48.90 00:20:02.551 lat (usec): min=378, max=4105, avg=792.20, stdev=49.12 00:20:02.551 clat percentiles (usec): 00:20:02.551 | 1.00th=[ 693], 5.00th=[ 717], 10.00th=[ 725], 20.00th=[ 742], 00:20:02.551 | 30.00th=[ 758], 40.00th=[ 766], 50.00th=[ 783], 60.00th=[ 791], 00:20:02.551 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 832], 95.00th=[ 840], 00:20:02.551 | 99.00th=[ 865], 99.50th=[ 873], 99.90th=[ 898], 99.95th=[ 914], 00:20:02.551 | 99.99th=[ 971] 00:20:02.551 bw ( KiB/s): min=19360, max=19936, per=50.01%, avg=19623.11, stdev=186.56, samples=19 00:20:02.551 iops : min= 4840, max= 
4984, avg=4905.74, stdev=46.66, samples=19 00:20:02.551 lat (usec) : 500=0.02%, 750=24.03%, 1000=75.95% 00:20:02.551 lat (msec) : 10=0.01% 00:20:02.551 cpu : usr=89.92%, sys=8.75%, ctx=17, majf=0, minf=0 00:20:02.551 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:02.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.551 issued rwts: total=49060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.551 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:02.551 filename1: (groupid=0, jobs=1): err= 0: pid=82651: Fri Jul 26 07:45:27 2024 00:20:02.551 read: IOPS=4904, BW=19.2MiB/s (20.1MB/s)(192MiB/10001msec) 00:20:02.551 slat (nsec): min=6824, max=62252, avg=13095.92, stdev=4073.30 00:20:02.551 clat (usec): min=570, max=5350, avg=780.14, stdev=61.84 00:20:02.551 lat (usec): min=585, max=5372, avg=793.24, stdev=62.58 00:20:02.551 clat percentiles (usec): 00:20:02.551 | 1.00th=[ 668], 5.00th=[ 701], 10.00th=[ 717], 20.00th=[ 742], 00:20:02.551 | 30.00th=[ 758], 40.00th=[ 766], 50.00th=[ 783], 60.00th=[ 791], 00:20:02.551 | 70.00th=[ 807], 80.00th=[ 824], 90.00th=[ 840], 95.00th=[ 857], 00:20:02.551 | 99.00th=[ 881], 99.50th=[ 889], 99.90th=[ 914], 99.95th=[ 922], 00:20:02.551 | 99.99th=[ 955] 00:20:02.551 bw ( KiB/s): min=19360, max=19936, per=50.00%, avg=19621.05, stdev=187.89, samples=19 00:20:02.551 iops : min= 4840, max= 4984, avg=4905.26, stdev=46.97, samples=19 00:20:02.551 lat (usec) : 750=26.42%, 1000=73.57% 00:20:02.551 lat (msec) : 10=0.01% 00:20:02.551 cpu : usr=89.43%, sys=9.19%, ctx=18, majf=0, minf=9 00:20:02.551 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:02.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.551 issued rwts: total=49049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.551 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:02.551 00:20:02.551 Run status group 0 (all jobs): 00:20:02.551 READ: bw=38.3MiB/s (40.2MB/s), 19.2MiB/s-19.2MiB/s (20.1MB/s-20.1MB/s), io=383MiB (402MB), run=10001-10001msec 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.551 00:20:02.551 real 0m11.221s 00:20:02.551 user 0m18.732s 00:20:02.551 sys 0m2.118s 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:02.551 ************************************ 00:20:02.551 END TEST fio_dif_1_multi_subsystems 00:20:02.551 07:45:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:02.551 ************************************ 00:20:02.551 07:45:28 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:02.551 07:45:28 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:02.551 07:45:28 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:02.551 07:45:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:02.551 ************************************ 00:20:02.551 START TEST fio_dif_rand_params 00:20:02.552 ************************************ 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:02.552 07:45:28 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.552 bdev_null0 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.552 [2024-07-26 07:45:28.083523] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:02.552 { 00:20:02.552 "params": { 00:20:02.552 "name": "Nvme$subsystem", 00:20:02.552 "trtype": "$TEST_TRANSPORT", 00:20:02.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.552 "adrfam": "ipv4", 00:20:02.552 "trsvcid": "$NVMF_PORT", 00:20:02.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.552 "hdgst": ${hdgst:-false}, 00:20:02.552 "ddgst": ${ddgst:-false} 00:20:02.552 }, 00:20:02.552 "method": "bdev_nvme_attach_controller" 00:20:02.552 } 00:20:02.552 EOF 
00:20:02.552 )") 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:02.552 "params": { 00:20:02.552 "name": "Nvme0", 00:20:02.552 "trtype": "tcp", 00:20:02.552 "traddr": "10.0.0.2", 00:20:02.552 "adrfam": "ipv4", 00:20:02.552 "trsvcid": "4420", 00:20:02.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:02.552 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:02.552 "hdgst": false, 00:20:02.552 "ddgst": false 00:20:02.552 }, 00:20:02.552 "method": "bdev_nvme_attach_controller" 00:20:02.552 }' 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:02.552 07:45:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:02.811 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:02.811 ... 
00:20:02.811 fio-3.35 00:20:02.811 Starting 3 threads 00:20:09.372 00:20:09.372 filename0: (groupid=0, jobs=1): err= 0: pid=82807: Fri Jul 26 07:45:33 2024 00:20:09.372 read: IOPS=259, BW=32.5MiB/s (34.0MB/s)(162MiB/5001msec) 00:20:09.372 slat (nsec): min=7065, max=49105, avg=9667.33, stdev=3384.16 00:20:09.372 clat (usec): min=10390, max=11800, avg=11524.45, stdev=118.16 00:20:09.372 lat (usec): min=10399, max=11812, avg=11534.12, stdev=118.25 00:20:09.372 clat percentiles (usec): 00:20:09.372 | 1.00th=[11207], 5.00th=[11338], 10.00th=[11338], 20.00th=[11469], 00:20:09.372 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11600], 60.00th=[11600], 00:20:09.372 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11600], 95.00th=[11731], 00:20:09.372 | 99.00th=[11731], 99.50th=[11731], 99.90th=[11731], 99.95th=[11863], 00:20:09.372 | 99.99th=[11863] 00:20:09.372 bw ( KiB/s): min=33024, max=33792, per=33.37%, avg=33280.00, stdev=384.00, samples=9 00:20:09.372 iops : min= 258, max= 264, avg=260.00, stdev= 3.00, samples=9 00:20:09.372 lat (msec) : 20=100.00% 00:20:09.372 cpu : usr=90.76%, sys=8.66%, ctx=13, majf=0, minf=0 00:20:09.372 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.372 issued rwts: total=1299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.372 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:09.372 filename0: (groupid=0, jobs=1): err= 0: pid=82808: Fri Jul 26 07:45:33 2024 00:20:09.372 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(163MiB/5005msec) 00:20:09.372 slat (nsec): min=7202, max=60281, avg=10078.24, stdev=4156.06 00:20:09.372 clat (usec): min=4742, max=13873, avg=11505.92, stdev=412.57 00:20:09.372 lat (usec): min=4749, max=13898, avg=11516.00, stdev=412.74 00:20:09.372 clat percentiles (usec): 00:20:09.372 | 1.00th=[11207], 5.00th=[11338], 10.00th=[11338], 20.00th=[11469], 00:20:09.372 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11600], 60.00th=[11600], 00:20:09.372 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11600], 95.00th=[11731], 00:20:09.372 | 99.00th=[11731], 99.50th=[11731], 99.90th=[13829], 99.95th=[13829], 00:20:09.372 | 99.99th=[13829] 00:20:09.372 bw ( KiB/s): min=32320, max=33792, per=33.29%, avg=33201.78, stdev=497.57, samples=9 00:20:09.372 iops : min= 252, max= 264, avg=259.33, stdev= 4.00, samples=9 00:20:09.372 lat (msec) : 10=0.46%, 20=99.54% 00:20:09.372 cpu : usr=90.89%, sys=8.55%, ctx=6, majf=0, minf=9 00:20:09.372 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.372 issued rwts: total=1302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.372 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:09.372 filename0: (groupid=0, jobs=1): err= 0: pid=82809: Fri Jul 26 07:45:33 2024 00:20:09.372 read: IOPS=259, BW=32.5MiB/s (34.0MB/s)(162MiB/5001msec) 00:20:09.372 slat (nsec): min=7093, max=37247, avg=9809.43, stdev=3543.10 00:20:09.372 clat (usec): min=9643, max=12097, avg=11523.61, stdev=140.20 00:20:09.372 lat (usec): min=9651, max=12123, avg=11533.42, stdev=140.38 00:20:09.372 clat percentiles (usec): 00:20:09.372 | 1.00th=[11207], 5.00th=[11338], 10.00th=[11338], 20.00th=[11469], 00:20:09.372 | 30.00th=[11469], 40.00th=[11469], 
50.00th=[11600], 60.00th=[11600], 00:20:09.372 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11600], 95.00th=[11731], 00:20:09.372 | 99.00th=[11731], 99.50th=[11731], 99.90th=[12125], 99.95th=[12125], 00:20:09.372 | 99.99th=[12125] 00:20:09.372 bw ( KiB/s): min=33024, max=33792, per=33.37%, avg=33280.00, stdev=384.00, samples=9 00:20:09.372 iops : min= 258, max= 264, avg=260.00, stdev= 3.00, samples=9 00:20:09.372 lat (msec) : 10=0.23%, 20=99.77% 00:20:09.372 cpu : usr=90.64%, sys=8.80%, ctx=4, majf=0, minf=9 00:20:09.372 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.372 issued rwts: total=1299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.372 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:09.372 00:20:09.372 Run status group 0 (all jobs): 00:20:09.372 READ: bw=97.4MiB/s (102MB/s), 32.5MiB/s-32.5MiB/s (34.0MB/s-34.1MB/s), io=488MiB (511MB), run=5001-5005msec 00:20:09.372 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:09.373 07:45:34 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 bdev_null0 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 [2024-07-26 07:45:34.195589] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 bdev_null1 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 bdev_null2 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.373 { 00:20:09.373 "params": { 00:20:09.373 "name": "Nvme$subsystem", 00:20:09.373 "trtype": "$TEST_TRANSPORT", 00:20:09.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.373 "adrfam": "ipv4", 00:20:09.373 "trsvcid": "$NVMF_PORT", 00:20:09.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:09.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.373 "hdgst": ${hdgst:-false}, 00:20:09.373 "ddgst": ${ddgst:-false} 00:20:09.373 }, 00:20:09.373 "method": "bdev_nvme_attach_controller" 00:20:09.373 } 00:20:09.373 EOF 00:20:09.373 )") 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.373 07:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.373 { 00:20:09.373 "params": { 00:20:09.373 "name": "Nvme$subsystem", 00:20:09.374 "trtype": "$TEST_TRANSPORT", 00:20:09.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.374 "adrfam": "ipv4", 00:20:09.374 "trsvcid": "$NVMF_PORT", 00:20:09.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.374 "hdgst": ${hdgst:-false}, 00:20:09.374 "ddgst": ${ddgst:-false} 00:20:09.374 }, 00:20:09.374 "method": "bdev_nvme_attach_controller" 00:20:09.374 } 00:20:09.374 EOF 00:20:09.374 )") 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:09.374 07:45:34 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.374 { 00:20:09.374 "params": { 00:20:09.374 "name": "Nvme$subsystem", 00:20:09.374 "trtype": "$TEST_TRANSPORT", 00:20:09.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.374 "adrfam": "ipv4", 00:20:09.374 "trsvcid": "$NVMF_PORT", 00:20:09.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.374 "hdgst": ${hdgst:-false}, 00:20:09.374 "ddgst": ${ddgst:-false} 00:20:09.374 }, 00:20:09.374 "method": "bdev_nvme_attach_controller" 00:20:09.374 } 00:20:09.374 EOF 00:20:09.374 )") 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:09.374 "params": { 00:20:09.374 "name": "Nvme0", 00:20:09.374 "trtype": "tcp", 00:20:09.374 "traddr": "10.0.0.2", 00:20:09.374 "adrfam": "ipv4", 00:20:09.374 "trsvcid": "4420", 00:20:09.374 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:09.374 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:09.374 "hdgst": false, 00:20:09.374 "ddgst": false 00:20:09.374 }, 00:20:09.374 "method": "bdev_nvme_attach_controller" 00:20:09.374 },{ 00:20:09.374 "params": { 00:20:09.374 "name": "Nvme1", 00:20:09.374 "trtype": "tcp", 00:20:09.374 "traddr": "10.0.0.2", 00:20:09.374 "adrfam": "ipv4", 00:20:09.374 "trsvcid": "4420", 00:20:09.374 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.374 "hdgst": false, 00:20:09.374 "ddgst": false 00:20:09.374 }, 00:20:09.374 "method": "bdev_nvme_attach_controller" 00:20:09.374 },{ 00:20:09.374 "params": { 00:20:09.374 "name": "Nvme2", 00:20:09.374 "trtype": "tcp", 00:20:09.374 "traddr": "10.0.0.2", 00:20:09.374 "adrfam": "ipv4", 00:20:09.374 "trsvcid": "4420", 00:20:09.374 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:09.374 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:09.374 "hdgst": false, 00:20:09.374 "ddgst": false 00:20:09.374 }, 00:20:09.374 "method": "bdev_nvme_attach_controller" 00:20:09.374 }' 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:09.374 07:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:09.374 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:09.374 ... 00:20:09.374 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:09.374 ... 00:20:09.374 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:09.374 ... 00:20:09.374 fio-3.35 00:20:09.374 Starting 24 threads 00:20:21.576 fio: pid=82916, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.576 [2024-07-26 07:45:46.729920] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x16dd860 via correct icresp 00:20:21.576 [2024-07-26 07:45:46.730004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16dd860 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=53186560, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=34996224, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=37572608, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=19476480, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=27541504, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=44208128, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=36532224, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=38891520, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=913408, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=58646528, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=18313216, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=24666112, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=58052608, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=48078848, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=4517888, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=50819072, buflen=4096 00:20:21.576 [2024-07-26 07:45:46.757815] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x16dd2c0 via correct icresp 00:20:21.576 [2024-07-26 07:45:46.757855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16dd2c0 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=45527040, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=31428608, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=38137856, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=48013312, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=19353600, buflen=4096 00:20:21.576 fio: io_u error on file 
Nvme1n1: Input/output error: read offset=60690432, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=5431296, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=40865792, buflen=4096 00:20:21.576 fio: pid=82912, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=28303360, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=54108160, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=45944832, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=36769792, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=19599360, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=9601024, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=6950912, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=58298368, buflen=4096 00:20:21.576 [2024-07-26 07:45:46.762834] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x16dda40 via correct icresp 00:20:21.576 [2024-07-26 07:45:46.762872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16dda40 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=2277376, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=33816576, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=15503360, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=18022400, buflen=4096 00:20:21.576 fio: pid=82915, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=35905536, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=64307200, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=35549184, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=39657472, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=52142080, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=34988032, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=55037952, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=55644160, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=54173696, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=22335488, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=62726144, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=58773504, buflen=4096 00:20:21.576 [2024-07-26 07:45:46.771845] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x16ddc20 via correct icresp 00:20:21.576 [2024-07-26 07:45:46.771882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16ddc20 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=21385216, buflen=4096 
00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=24121344, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=65490944, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=54857728, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=65241088, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=26177536, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=14835712, buflen=4096 00:20:21.576 fio: pid=82917, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=1720320, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=40833024, buflen=4096 00:20:21.576 fio: io_u error on file Nvme1n1: Input/output error: read offset=2764800, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=59269120, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=65015808, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=8192, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=37806080, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=15462400, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=17604608, buflen=4096 00:20:21.577 [2024-07-26 07:45:46.773870] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x16dc000 via correct icresp 00:20:21.577 [2024-07-26 07:45:46.773908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16dc000 00:20:21.577 fio: pid=82921, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=19333120, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=56950784, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=35749888, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=24522752, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=45363200, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=46178304, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=12845056, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=48640000, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=48943104, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=19562496, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=10452992, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=7839744, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=52715520, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=51306496, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=22798336, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read 
offset=41910272, buflen=4096 00:20:21.577 [2024-07-26 07:45:46.798841] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x16dd4a0 via correct icresp 00:20:21.577 [2024-07-26 07:45:46.799078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16dd4a0 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=3481600, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=56184832, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=66342912, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=24215552, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=43143168, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=66883584, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=51359744, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=10039296, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=26849280, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=11550720, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=26439680, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=22265856, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=16035840, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=34263040, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=64954368, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=65003520, buflen=4096 00:20:21.577 fio: pid=82922, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.577 [2024-07-26 07:45:46.809023] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x16dde00 via correct icresp 00:20:21.577 [2024-07-26 07:45:46.809304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16dde00 00:20:21.577 [2024-07-26 07:45:46.809032] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x16dd680 via correct icresp 00:20:21.577 [2024-07-26 07:45:46.809549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16dd680 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=51261440, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=23396352, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=57282560, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=21434368, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=10530816, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=2691072, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=15687680, buflen=4096 00:20:21.577 fio: pid=82908, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=49655808, buflen=4096 00:20:21.577 fio: io_u error on file 
Nvme0n1: Input/output error: read offset=23162880, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=65122304, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=3477504, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=5808128, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=32575488, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=36069376, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=32911360, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=9801728, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=28934144, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=47759360, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=5263360, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=64847872, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=737280, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=36368384, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=43036672, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=52785152, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=36683776, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=37040128, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=48943104, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=40386560, buflen=4096 00:20:21.577 fio: pid=82918, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=24539136, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=50425856, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=10510336, buflen=4096 00:20:21.577 fio: io_u error on file Nvme1n1: Input/output error: read offset=27250688, buflen=4096 00:20:21.577 fio: pid=82920, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.577 [2024-07-26 07:45:46.826819] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2d0a3c0 via correct icresp 00:20:21.577 [2024-07-26 07:45:46.826854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2d0a3c0 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=56950784, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=28827648, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=49283072, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=65605632, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=58163200, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=34717696, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=34066432, buflen=4096 
00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=49029120, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=50032640, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=9048064, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=25878528, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=53391360, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=44511232, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=14729216, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=33435648, buflen=4096 00:20:21.577 fio: io_u error on file Nvme2n1: Input/output error: read offset=42016768, buflen=4096 00:20:21.577 [2024-07-26 07:45:46.831053] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2d0a000 via correct icresp 00:20:21.577 [2024-07-26 07:45:46.831068] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2d0a1e0 via correct icresp 00:20:21.577 [2024-07-26 07:45:46.831094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2d0a000 00:20:21.577 fio: pid=82906, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.577 [2024-07-26 07:45:46.831240] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2d0a5a0 via correct icresp 00:20:21.577 [2024-07-26 07:45:46.831282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2d0a5a0 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=59994112, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=62394368, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=12816384, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=16596992, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=20307968, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=40996864, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=52510720, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=36761600, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=24530944, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=22044672, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=32223232, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=54571008, buflen=4096 00:20:21.577 fio: io_u error on file Nvme0n1: Input/output error: read offset=2121728, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=12918784, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=28672, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=18472960, buflen=4096 00:20:21.578 [2024-07-26 07:45:46.831819] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2d0ad20 via correct icresp 00:20:21.578 [2024-07-26 
07:45:46.831826] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2d0a960 via correct icresp 00:20:21.578 [2024-07-26 07:45:46.831873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2d0ad20 00:20:21.578 [2024-07-26 07:45:46.831890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2d0a960 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=47452160, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=34287616, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=831488, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=53870592, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=37478400, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=2457600, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=40656896, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=11014144, buflen=4096 00:20:21.578 fio: pid=82913, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=48861184, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=17141760, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=55959552, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=27402240, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=38703104, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=35835904, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=1691648, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=61239296, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=53342208, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=39178240, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=23973888, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=38977536, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=12337152, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=4063232, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=63217664, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=3981312, buflen=4096 00:20:21.578 fio: pid=82911, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=36765696, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=58126336, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=41697280, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=16314368, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=13213696, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: 
Input/output error: read offset=12914688, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=30134272, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=38567936, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=26419200, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=56233984, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=7028736, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=57393152, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=66609152, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=49180672, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=20025344, buflen=4096 00:20:21.578 fio: pid=82910, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=44130304, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=23453696, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=34766848, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=36364288, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=32235520, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=32780288, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=54685696, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=13651968, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=10285056, buflen=4096 00:20:21.578 [2024-07-26 07:45:46.833313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2d0a1e0 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=64974848, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=51191808, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=12398592, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=61280256, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=48734208, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=11575296, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=52445184, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=60080128, buflen=4096 00:20:21.578 fio: pid=82924, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=44896256, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=53538816, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=62349312, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=37318656, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=2191360, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read 
offset=40357888, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=4063232, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=26181632, buflen=4096 00:20:21.578 [2024-07-26 07:45:46.834200] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2d0af00 via correct icresp 00:20:21.578 [2024-07-26 07:45:46.834233] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2d0b0e0 via correct icresp 00:20:21.578 [2024-07-26 07:45:46.834235] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2d0a780 via correct icresp 00:20:21.578 [2024-07-26 07:45:46.834237] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2d0b2c0 via correct icresp 00:20:21.578 [2024-07-26 07:45:46.834329] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2d0b4a0 via correct icresp 00:20:21.578 fio: pid=82905, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.578 [2024-07-26 07:45:46.834239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2d0af00 00:20:21.578 [2024-07-26 07:45:46.834406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2d0b0e0 00:20:21.578 [2024-07-26 07:45:46.834443] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2d0ab40 via correct icresp 00:20:21.578 [2024-07-26 07:45:46.834491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2d0b2c0 00:20:21.578 [2024-07-26 07:45:46.834581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2d0ab40 00:20:21.578 [2024-07-26 07:45:46.834663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2d0a780 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=15663104, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=9408512, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=15581184, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=27107328, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=55336960, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=39804928, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=46354432, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=34320384, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=9441280, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=26243072, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=39927808, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=55951360, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=46231552, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=23080960, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=37081088, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read 
offset=33689600, buflen=4096 00:20:21.578 [2024-07-26 07:45:46.834949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dc780 (9): Bad file descriptor 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=39387136, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=47931392, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=21307392, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=58040320, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=46010368, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=42749952, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=35491840, buflen=4096 00:20:21.578 fio: io_u error on file Nvme0n1: Input/output error: read offset=23298048, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=44494848, buflen=4096 00:20:21.578 fio: io_u error on file Nvme1n1: Input/output error: read offset=57552896, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=7815168, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=19873792, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=6111232, buflen=4096 00:20:21.578 fio: io_u error on file Nvme2n1: Input/output error: read offset=16973824, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=49205248, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=31784960, buflen=4096 00:20:21.579 fio: io_u error on file Nvme0n1: Input/output error: read offset=37404672, buflen=4096 00:20:21.579 fio: io_u error on file Nvme1n1: Input/output error: read offset=17387520, buflen=4096 00:20:21.579 fio: io_u error on file Nvme1n1: Input/output error: read offset=37646336, buflen=4096 00:20:21.579 fio: io_u error on file Nvme1n1: Input/output error: read offset=42897408, buflen=4096 00:20:21.579 fio: io_u error on file Nvme0n1: Input/output error: read offset=53178368, buflen=4096 00:20:21.579 fio: pid=82919, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.579 fio: pid=82926, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.579 fio: pid=82907, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=7819264, buflen=4096 00:20:21.579 fio: io_u error on file Nvme1n1: Input/output error: read offset=28008448, buflen=4096 00:20:21.579 fio: io_u error on file Nvme1n1: Input/output error: read offset=42741760, buflen=4096 00:20:21.579 fio: io_u error on file Nvme1n1: Input/output error: read offset=2555904, buflen=4096 00:20:21.579 fio: io_u error on file Nvme0n1: Input/output error: read offset=782336, buflen=4096 00:20:21.579 fio: io_u error on file Nvme0n1: Input/output error: read offset=39247872, buflen=4096 00:20:21.579 fio: io_u error on file Nvme0n1: Input/output error: read offset=13889536, buflen=4096 00:20:21.579 fio: io_u error on file Nvme0n1: Input/output error: read offset=13529088, buflen=4096 00:20:21.579 fio: io_u error on file Nvme1n1: Input/output error: read offset=34897920, buflen=4096 00:20:21.579 fio: io_u error on file Nvme1n1: Input/output error: read 
offset=50384896, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=59662336, buflen=4096 00:20:21.579 fio: io_u error on file Nvme0n1: Input/output error: read offset=66187264, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=34951168, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=1232896, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=54398976, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=61435904, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=61767680, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=40538112, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=51154944, buflen=4096 00:20:21.579 fio: io_u error on file Nvme0n1: Input/output error: read offset=42729472, buflen=4096 00:20:21.579 fio: io_u error on file Nvme0n1: Input/output error: read offset=21630976, buflen=4096 00:20:21.579 fio: io_u error on file Nvme0n1: Input/output error: read offset=59961344, buflen=4096 00:20:21.579 fio: io_u error on file Nvme0n1: Input/output error: read offset=8798208, buflen=4096 00:20:21.579 fio: io_u error on file Nvme0n1: Input/output error: read offset=32940032, buflen=4096 00:20:21.579 fio: io_u error on file Nvme0n1: Input/output error: read offset=12214272, buflen=4096 00:20:21.579 fio: io_u error on file Nvme0n1: Input/output error: read offset=58417152, buflen=4096 00:20:21.579 fio: io_u error on file Nvme0n1: Input/output error: read offset=7098368, buflen=4096 00:20:21.579 [2024-07-26 07:45:46.835393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2d0b4a0 00:20:21.579 fio: pid=82927, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=38592512, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=49111040, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=16941056, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=34516992, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=16191488, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=10518528, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=50724864, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=65597440, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=49045504, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=24788992, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=33144832, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=9060352, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=38088704, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=60559360, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=40730624, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=9224192, 
buflen=4096 00:20:21.579 [2024-07-26 07:45:46.836029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dc5a0 (9): Bad file descriptor 00:20:21.579 fio: pid=82923, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=2039808, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=37330944, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=58757120, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=49201152, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=32235520, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=66932736, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=6856704, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=49119232, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=14749696, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=32546816, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=36941824, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=29216768, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=45797376, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=45506560, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=6017024, buflen=4096 00:20:21.579 fio: io_u error on file Nvme2n1: Input/output error: read offset=22528000, buflen=4096 00:20:21.579 [2024-07-26 07:45:46.836325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dc960 (9): Bad file descriptor 00:20:21.579 00:20:21.579 filename0: (groupid=0, jobs=1): err= 0: pid=82904: Fri Jul 26 07:45:46 2024 00:20:21.579 read: IOPS=1100, BW=4401KiB/s (4507kB/s)(43.0MiB/10011msec) 00:20:21.579 slat (usec): min=7, max=8022, avg=15.33, stdev=173.06 00:20:21.579 clat (usec): min=434, max=37979, avg=14397.31, stdev=5979.12 00:20:21.579 lat (usec): min=443, max=37991, avg=14412.64, stdev=5981.96 00:20:21.579 clat percentiles (usec): 00:20:21.579 | 1.00th=[ 1795], 5.00th=[ 3916], 10.00th=[ 7373], 20.00th=[ 9372], 00:20:21.579 | 30.00th=[11731], 40.00th=[12125], 50.00th=[13698], 60.00th=[15008], 00:20:21.579 | 70.00th=[16909], 80.00th=[20841], 90.00th=[22938], 95.00th=[23987], 00:20:21.579 | 99.00th=[28443], 99.50th=[29492], 99.90th=[32900], 99.95th=[35390], 00:20:21.579 | 99.99th=[38011] 00:20:21.579 bw ( KiB/s): min= 3312, max= 7552, per=25.11%, avg=4378.53, stdev=1093.60, samples=19 00:20:21.579 iops : min= 828, max= 1888, avg=1094.63, stdev=273.40, samples=19 00:20:21.579 lat (usec) : 500=0.02%, 750=0.07%, 1000=0.12% 00:20:21.579 lat (msec) : 2=1.26%, 4=3.66%, 10=17.68%, 20=54.11%, 50=23.08% 00:20:21.579 cpu : usr=38.67%, sys=3.87%, ctx=1303, majf=0, minf=9 00:20:21.579 IO depths : 1=2.1%, 2=8.0%, 4=24.0%, 8=55.6%, 16=10.4%, 32=0.0%, >=64=0.0% 00:20:21.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.579 complete : 0=0.0%, 4=94.0%, 8=0.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.579 issued rwts: total=11015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.579 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:20:21.579 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82905: Fri Jul 26 07:45:46 2024 00:20:21.579 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:20:21.579 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.579 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.579 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.579 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82906: Fri Jul 26 07:45:46 2024 00:20:21.579 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:20:21.579 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.579 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.579 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.579 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82907: Fri Jul 26 07:45:46 2024 00:20:21.579 cpu : usr=0.00%, sys=0.00%, ctx=6, majf=0, minf=0 00:20:21.579 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.579 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.579 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.579 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82908: Fri Jul 26 07:45:46 2024 00:20:21.579 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:20:21.579 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.579 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.579 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.579 filename0: (groupid=0, jobs=1): err= 0: pid=82909: Fri Jul 26 07:45:46 2024 00:20:21.579 read: IOPS=1057, BW=4230KiB/s (4332kB/s)(41.3MiB/10003msec) 00:20:21.579 slat (usec): min=4, max=8022, avg=12.34, stdev=116.84 00:20:21.579 clat (usec): min=408, max=37158, avg=15018.22, stdev=6331.54 00:20:21.579 lat (usec): min=416, max=37174, avg=15030.56, stdev=6330.46 00:20:21.579 clat percentiles (usec): 00:20:21.579 | 1.00th=[ 1663], 5.00th=[ 2507], 10.00th=[ 9372], 20.00th=[11207], 00:20:21.580 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12911], 60.00th=[13829], 00:20:21.580 | 70.00th=[20579], 80.00th=[22938], 90.00th=[23987], 95.00th=[23987], 00:20:21.580 | 99.00th=[25035], 99.50th=[26870], 99.90th=[35914], 99.95th=[35914], 00:20:21.580 | 99.99th=[36963] 00:20:21.580 bw ( KiB/s): min= 3184, max= 6624, per=23.00%, avg=4011.21, stdev=911.97, samples=19 00:20:21.580 iops : min= 796, max= 1656, avg=1002.79, stdev=227.97, samples=19 00:20:21.580 lat (usec) : 500=0.10%, 750=0.02%, 1000=0.05% 00:20:21.580 lat (msec) : 
2=3.15%, 4=3.34%, 10=6.17%, 20=56.02%, 50=31.16% 00:20:21.580 cpu : usr=30.22%, sys=2.95%, ctx=908, majf=0, minf=9 00:20:21.580 IO depths : 1=2.6%, 2=8.6%, 4=24.3%, 8=54.8%, 16=9.7%, 32=0.0%, >=64=0.0% 00:20:21.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 issued rwts: total=10579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.580 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82910: Fri Jul 26 07:45:46 2024 00:20:21.580 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:20:21.580 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.580 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82911: Fri Jul 26 07:45:46 2024 00:20:21.580 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:20:21.580 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.580 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82912: Fri Jul 26 07:45:46 2024 00:20:21.580 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:20:21.580 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.580 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82913: Fri Jul 26 07:45:46 2024 00:20:21.580 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:20:21.580 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.580 filename1: (groupid=0, jobs=1): err= 0: pid=82914: Fri Jul 26 07:45:46 2024 00:20:21.580 read: IOPS=1098, BW=4394KiB/s (4500kB/s)(43.0MiB/10010msec) 00:20:21.580 slat (usec): min=7, max=8031, avg=15.13, stdev=165.67 00:20:21.580 clat (usec): min=734, max=38674, avg=14453.53, stdev=6097.46 00:20:21.580 lat (usec): min=743, max=38682, avg=14468.66, stdev=6097.85 00:20:21.580 clat percentiles (usec): 00:20:21.580 | 1.00th=[ 1893], 5.00th=[ 4621], 10.00th=[ 7177], 20.00th=[ 8455], 00:20:21.580 | 30.00th=[10814], 40.00th=[13042], 50.00th=[14353], 
60.00th=[15664], 00:20:21.580 | 70.00th=[17171], 80.00th=[20579], 90.00th=[22938], 95.00th=[23987], 00:20:21.580 | 99.00th=[27919], 99.50th=[30802], 99.90th=[33424], 99.95th=[35914], 00:20:21.580 | 99.99th=[38536] 00:20:21.580 bw ( KiB/s): min= 3312, max= 7392, per=24.87%, avg=4336.42, stdev=1153.95, samples=19 00:20:21.580 iops : min= 828, max= 1848, avg=1084.11, stdev=288.49, samples=19 00:20:21.580 lat (usec) : 750=0.02%, 1000=0.09% 00:20:21.580 lat (msec) : 2=1.32%, 4=2.47%, 10=23.51%, 20=51.09%, 50=21.51% 00:20:21.580 cpu : usr=41.67%, sys=4.54%, ctx=1397, majf=0, minf=10 00:20:21.580 IO depths : 1=2.3%, 2=8.4%, 4=24.5%, 8=54.8%, 16=10.1%, 32=0.0%, >=64=0.0% 00:20:21.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 issued rwts: total=10997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.580 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82915: Fri Jul 26 07:45:46 2024 00:20:21.580 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:20:21.580 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.580 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82916: Fri Jul 26 07:45:46 2024 00:20:21.580 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=1 00:20:21.580 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.580 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82917: Fri Jul 26 07:45:46 2024 00:20:21.580 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:20:21.580 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.580 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82918: Fri Jul 26 07:45:46 2024 00:20:21.580 cpu : usr=0.00%, sys=0.00%, ctx=13, majf=0, minf=0 00:20:21.580 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.580 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, 
error=Input/output error): pid=82919: Fri Jul 26 07:45:46 2024 00:20:21.580 cpu : usr=0.00%, sys=0.00%, ctx=4, majf=0, minf=0 00:20:21.580 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.580 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82920: Fri Jul 26 07:45:46 2024 00:20:21.580 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:20:21.580 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.580 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82921: Fri Jul 26 07:45:46 2024 00:20:21.580 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:20:21.580 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.580 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82922: Fri Jul 26 07:45:46 2024 00:20:21.580 cpu : usr=0.00%, sys=0.00%, ctx=16, majf=0, minf=0 00:20:21.580 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.580 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82923: Fri Jul 26 07:45:46 2024 00:20:21.580 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:20:21.580 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.580 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82924: Fri Jul 26 07:45:46 2024 00:20:21.580 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:20:21.580 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.580 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:20:21.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.580 filename2: (groupid=0, jobs=1): err= 0: pid=82925: Fri Jul 26 07:45:46 2024 00:20:21.580 read: IOPS=1104, BW=4418KiB/s (4524kB/s)(43.2MiB/10004msec) 00:20:21.580 slat (usec): min=5, max=8022, avg=16.36, stdev=179.45 00:20:21.580 clat (usec): min=438, max=36854, avg=14356.00, stdev=6074.08 00:20:21.581 lat (usec): min=447, max=36863, avg=14372.36, stdev=6077.77 00:20:21.581 clat percentiles (usec): 00:20:21.581 | 1.00th=[ 1762], 5.00th=[ 4555], 10.00th=[ 6980], 20.00th=[ 9634], 00:20:21.581 | 30.00th=[11076], 40.00th=[11994], 50.00th=[13435], 60.00th=[15533], 00:20:21.581 | 70.00th=[17171], 80.00th=[20841], 90.00th=[23462], 95.00th=[23987], 00:20:21.581 | 99.00th=[27132], 99.50th=[30802], 99.90th=[35914], 99.95th=[35914], 00:20:21.581 | 99.99th=[36963] 00:20:21.581 bw ( KiB/s): min= 3312, max= 6679, per=25.22%, avg=4398.16, stdev=896.40, samples=19 00:20:21.581 iops : min= 828, max= 1669, avg=1099.47, stdev=223.99, samples=19 00:20:21.581 lat (usec) : 500=0.04%, 750=0.04%, 1000=0.20% 00:20:21.581 lat (msec) : 2=2.01%, 4=2.39%, 10=17.86%, 20=55.49%, 50=21.98% 00:20:21.581 cpu : usr=37.90%, sys=3.73%, ctx=1264, majf=0, minf=9 00:20:21.581 IO depths : 1=2.2%, 2=7.8%, 4=22.8%, 8=56.9%, 16=10.4%, 32=0.0%, >=64=0.0% 00:20:21.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.581 complete : 0=0.0%, 4=93.7%, 8=1.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.581 issued rwts: total=11050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.581 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82926: Fri Jul 26 07:45:46 2024 00:20:21.581 cpu : usr=0.00%, sys=0.00%, ctx=3, majf=0, minf=0 00:20:21.581 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.581 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.581 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.581 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82927: Fri Jul 26 07:45:46 2024 00:20:21.581 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:20:21.581 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:20:21.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.581 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.581 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:21.581 00:20:21.581 Run status group 0 (all jobs): 00:20:21.581 READ: bw=17.0MiB/s (17.9MB/s), 4230KiB/s-4418KiB/s (4332kB/s-4524kB/s), io=170MiB (179MB), run=10003-10011msec 00:20:21.581 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # trap - ERR 00:20:21.581 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # print_backtrace 00:20:21.581 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:20:21.581 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1155 -- # args=('/dev/fd/61' '/dev/fd/62' '--spdk_json_conf' '--ioengine=spdk_bdev' 
'/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' '/dev/fd/61' '/dev/fd/62' '--spdk_json_conf' '--ioengine=spdk_bdev' '/dev/fd/62' 'fio_dif_rand_params' 'fio_dif_rand_params' '--iso' '--transport=tcp') 00:20:21.581 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1155 -- # local args 00:20:21.581 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1157 -- # xtrace_disable 00:20:21.581 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:21.581 ========== Backtrace start: ========== 00:20:21.581 00:20:21.581 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1352 -> fio_plugin(["/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev"],["--ioengine=spdk_bdev"],["--spdk_json_conf"],["/dev/fd/62"],["/dev/fd/61"]) 00:20:21.581 ... 00:20:21.581 1347 break 00:20:21.581 1348 fi 00:20:21.581 1349 done 00:20:21.581 1350 00:20:21.581 1351 # Preload the sanitizer library to fio if fio_plugin was compiled with it 00:20:21.581 1352 LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@" 00:20:21.581 1353 } 00:20:21.581 1354 00:20:21.581 1355 function fio_bdev() { 00:20:21.581 1356 fio_plugin "$rootdir/build/fio/spdk_bdev" "$@" 00:20:21.581 1357 } 00:20:21.581 ... 00:20:21.581 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1356 -> fio_bdev(["--ioengine=spdk_bdev"],["--spdk_json_conf"],["/dev/fd/62"],["/dev/fd/61"]) 00:20:21.581 ... 00:20:21.581 1351 # Preload the sanitizer library to fio if fio_plugin was compiled with it 00:20:21.581 1352 LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@" 00:20:21.581 1353 } 00:20:21.581 1354 00:20:21.581 1355 function fio_bdev() { 00:20:21.581 1356 fio_plugin "$rootdir/build/fio/spdk_bdev" "$@" 00:20:21.581 1357 } 00:20:21.581 1358 00:20:21.581 1359 function fio_nvme() { 00:20:21.581 1360 fio_plugin "$rootdir/build/fio/spdk_nvme" "$@" 00:20:21.581 1361 } 00:20:21.581 ... 00:20:21.839 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:82 -> fio(["/dev/fd/62"]) 00:20:21.839 ... 00:20:21.839 77 FIO 00:20:21.839 78 done 00:20:21.839 79 } 00:20:21.839 80 00:20:21.839 81 fio() { 00:20:21.839 => 82 fio_bdev --ioengine=spdk_bdev --spdk_json_conf "$@" <(gen_fio_conf) 00:20:21.839 83 } 00:20:21.839 84 00:20:21.840 85 fio_dif_1() { 00:20:21.840 86 create_subsystems 0 00:20:21.840 87 fio <(create_json_sub_conf 0) 00:20:21.840 ... 00:20:21.840 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:112 -> fio_dif_rand_params([]) 00:20:21.840 ... 00:20:21.840 107 destroy_subsystems 0 00:20:21.840 108 00:20:21.840 109 NULL_DIF=2 bs=4k numjobs=8 iodepth=16 runtime="" files=2 00:20:21.840 110 00:20:21.840 111 create_subsystems 0 1 2 00:20:21.840 => 112 fio <(create_json_sub_conf 0 1 2) 00:20:21.840 113 destroy_subsystems 0 1 2 00:20:21.840 114 00:20:21.840 115 NULL_DIF=1 bs=8k,16k,128k numjobs=2 iodepth=8 runtime=5 files=1 00:20:21.840 116 00:20:21.840 117 create_subsystems 0 1 00:20:21.840 ... 00:20:21.840 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["fio_dif_rand_params"],["fio_dif_rand_params"]) 00:20:21.840 ... 
00:20:21.840 1120 timing_enter $test_name 00:20:21.840 1121 echo "************************************" 00:20:21.840 1122 echo "START TEST $test_name" 00:20:21.840 1123 echo "************************************" 00:20:21.840 1124 xtrace_restore 00:20:21.840 1125 time "$@" 00:20:21.840 1126 xtrace_disable 00:20:21.840 1127 echo "************************************" 00:20:21.840 1128 echo "END TEST $test_name" 00:20:21.840 1129 echo "************************************" 00:20:21.840 1130 timing_exit $test_name 00:20:21.840 ... 00:20:21.840 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:143 -> main(["--transport=tcp"],["--iso"]) 00:20:21.840 ... 00:20:21.840 138 00:20:21.840 139 create_transport 00:20:21.840 140 00:20:21.840 141 run_test "fio_dif_1_default" fio_dif_1 00:20:21.840 142 run_test "fio_dif_1_multi_subsystems" fio_dif_1_multi_subsystems 00:20:21.840 => 143 run_test "fio_dif_rand_params" fio_dif_rand_params 00:20:21.840 144 run_test "fio_dif_digest" fio_dif_digest 00:20:21.840 145 00:20:21.840 146 trap - SIGINT SIGTERM EXIT 00:20:21.840 147 nvmftestfini 00:20:21.840 ... 00:20:21.840 00:20:21.840 ========== Backtrace end ========== 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1194 -- # return 0 00:20:21.840 00:20:21.840 real 0m19.152s 00:20:21.840 user 2m3.202s 00:20:21.840 sys 0m3.575s 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1 -- # process_shm --id 0 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@808 -- # type=--id 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@809 -- # id=0 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:21.840 nvmf_trace.0 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@823 -- # return 0 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1 -- # nvmftestfini 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@117 -- # sync 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@120 -- # set +e 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:21.840 rmmod nvme_tcp 00:20:21.840 rmmod nvme_fabrics 00:20:21.840 rmmod nvme_keyring 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@124 -- # set -e 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@125 -- 
# return 0 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@489 -- # '[' -n 82423 ']' 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@490 -- # killprocess 82423 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@950 -- # '[' -z 82423 ']' 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@954 -- # kill -0 82423 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@955 -- # uname 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82423 00:20:21.840 killing process with pid 82423 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82423' 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@969 -- # kill 82423 00:20:21.840 07:45:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@974 -- # wait 82423 00:20:22.098 07:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:20:22.098 07:45:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:22.356 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:22.356 Waiting for block devices as requested 00:20:22.618 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:22.618 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:22.618 07:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:22.618 07:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:22.618 07:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.618 07:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:22.618 07:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.618 07:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:22.618 07:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.618 07:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:22.618 07:45:48 nvmf_dif -- common/autotest_common.sh@1125 -- # trap - ERR 00:20:22.618 07:45:48 nvmf_dif -- common/autotest_common.sh@1125 -- # print_backtrace 00:20:22.618 07:45:48 nvmf_dif -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:20:22.618 07:45:48 nvmf_dif -- common/autotest_common.sh@1155 -- # args=('/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh' 'nvmf_dif' '/home/vagrant/spdk_repo/autorun-spdk.conf') 00:20:22.618 07:45:48 nvmf_dif -- common/autotest_common.sh@1155 -- # local args 00:20:22.618 07:45:48 nvmf_dif -- common/autotest_common.sh@1157 -- # xtrace_disable 00:20:22.618 07:45:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:22.618 ========== Backtrace start: ========== 00:20:22.618 00:20:22.618 in 
00:20:22.618 ========== Backtrace start: ==========
00:20:22.618
00:20:22.618 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_dif"],["/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh"])
00:20:22.618 ...
00:20:22.618 1120 timing_enter $test_name
00:20:22.618 1121 echo "************************************"
00:20:22.618 1122 echo "START TEST $test_name"
00:20:22.618 1123 echo "************************************"
00:20:22.618 1124 xtrace_restore
00:20:22.618 1125 time "$@"
00:20:22.618 1126 xtrace_disable
00:20:22.618 1127 echo "************************************"
00:20:22.618 1128 echo "END TEST $test_name"
00:20:22.618 1129 echo "************************************"
00:20:22.618 1130 timing_exit $test_name
00:20:22.618 ...
00:20:22.618 in /home/vagrant/spdk_repo/spdk/autotest.sh:296 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"])
00:20:22.618 ...
00:20:22.618 291 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT
00:20:22.618 292 if [[ $SPDK_TEST_URING -eq 0 ]]; then
00:20:22.618 293 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT
00:20:22.618 294 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT
00:20:22.618 295 fi
00:20:22.618 => 296 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh
00:20:22.618 297 run_test "nvmf_abort_qd_sizes" $rootdir/test/nvmf/target/abort_qd_sizes.sh
00:20:22.618 298 # The keyring tests utilize NVMe/TLS
00:20:22.618 299 run_test "keyring_file" "$rootdir/test/keyring/file.sh"
00:20:22.618 300 if [[ "$CONFIG_HAVE_KEYUTILS" == y ]]; then
00:20:22.618 301 run_test "keyring_linux" "$rootdir/test/keyring/linux.sh"
00:20:22.618 ...
00:20:22.618
00:20:22.618 ========== Backtrace end ==========
00:20:22.618 07:45:48 nvmf_dif -- common/autotest_common.sh@1194 -- # return 0
00:20:22.618
00:20:22.618 real 0m44.512s
00:20:22.618 user 3m3.963s
00:20:22.618 sys 0m11.644s
00:20:22.618 07:45:48 nvmf_dif -- common/autotest_common.sh@1 -- # autotest_cleanup
00:20:22.618 07:45:48 nvmf_dif -- common/autotest_common.sh@1392 -- # local autotest_es=20
00:20:22.618 07:45:48 nvmf_dif -- common/autotest_common.sh@1393 -- # xtrace_disable
00:20:22.618 07:45:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:20:34.815 INFO: APP EXITING
00:20:34.815 INFO: killing all VMs
00:20:34.815 INFO: killing vhost app
00:20:34.815 INFO: EXIT DONE
00:20:35.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:35.074 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:20:35.074 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:20:35.641 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:35.641 Cleaning
00:20:35.641 Removing: /var/run/dpdk/spdk0/config
00:20:35.641 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:20:35.641 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:20:35.641 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:20:35.641 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:20:35.641 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:20:35.641 Removing: /var/run/dpdk/spdk0/hugepage_info
00:20:35.641 Removing: /var/run/dpdk/spdk1/config
00:20:35.641 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:20:35.641 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:20:35.641 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:20:35.641 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:20:35.641 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:20:35.900 Removing: /var/run/dpdk/spdk1/hugepage_info
00:20:35.900 Removing: /var/run/dpdk/spdk2/config
00:20:35.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:20:35.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:20:35.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:20:35.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:20:35.900 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:20:35.900 Removing: /var/run/dpdk/spdk2/hugepage_info
00:20:35.900 Removing: /var/run/dpdk/spdk3/config
00:20:35.900 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:20:35.900 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:20:35.900 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:20:35.900 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:20:35.900 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:20:35.900 Removing: /var/run/dpdk/spdk3/hugepage_info
00:20:35.900 Removing: /var/run/dpdk/spdk4/config
00:20:35.900 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:20:35.900 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:20:35.900 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:20:35.900 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:20:35.900 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:20:35.900 Removing: /var/run/dpdk/spdk4/hugepage_info
00:20:35.900 Removing: /dev/shm/nvmf_trace.0
00:20:35.900 Removing: /dev/shm/spdk_tgt_trace.pid58879
00:20:35.900 Removing: /var/run/dpdk/spdk0
00:20:35.900 Removing: /var/run/dpdk/spdk1
00:20:35.900 Removing: /var/run/dpdk/spdk2
00:20:35.900 Removing: /var/run/dpdk/spdk3
00:20:35.900 Removing: /var/run/dpdk/spdk4
00:20:35.900 Removing: /var/run/dpdk/spdk_pid58728
00:20:35.900 Removing: /var/run/dpdk/spdk_pid58879
00:20:35.900 Removing: /var/run/dpdk/spdk_pid59077
00:20:35.900 Removing: /var/run/dpdk/spdk_pid59169
00:20:35.900 Removing: /var/run/dpdk/spdk_pid59196
00:20:35.900 Removing: /var/run/dpdk/spdk_pid59306
00:20:35.900 Removing: /var/run/dpdk/spdk_pid59324
00:20:35.900 Removing: /var/run/dpdk/spdk_pid59443
00:20:35.900 Removing: /var/run/dpdk/spdk_pid59643
00:20:35.900 Removing: /var/run/dpdk/spdk_pid59784
00:20:35.900 Removing: /var/run/dpdk/spdk_pid59859
00:20:35.900 Removing: /var/run/dpdk/spdk_pid59931
00:20:35.900 Removing: /var/run/dpdk/spdk_pid60022
00:20:35.900 Removing: /var/run/dpdk/spdk_pid60098
00:20:35.900 Removing: /var/run/dpdk/spdk_pid60137
00:20:35.900 Removing: /var/run/dpdk/spdk_pid60167
00:20:35.900 Removing: /var/run/dpdk/spdk_pid60229
00:20:35.900 Removing: /var/run/dpdk/spdk_pid60328
00:20:35.900 Removing: /var/run/dpdk/spdk_pid60761
00:20:35.900 Removing: /var/run/dpdk/spdk_pid60813
00:20:35.900 Removing: /var/run/dpdk/spdk_pid60864
00:20:35.900 Removing: /var/run/dpdk/spdk_pid60880
00:20:35.900 Removing: /var/run/dpdk/spdk_pid60958
00:20:35.900 Removing: /var/run/dpdk/spdk_pid60974
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61047
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61063
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61108
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61126
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61176
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61195
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61323
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61359
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61433
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61743
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61755
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61792
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61811
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61826
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61851
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61864
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61884
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61910
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61922
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61939
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61963
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61977
00:20:35.900 Removing: /var/run/dpdk/spdk_pid61998
00:20:35.900 Removing: /var/run/dpdk/spdk_pid62017
00:20:35.900 Removing: /var/run/dpdk/spdk_pid62036
00:20:35.900 Removing: /var/run/dpdk/spdk_pid62057
00:20:35.900 Removing: /var/run/dpdk/spdk_pid62076
00:20:35.900 Removing: /var/run/dpdk/spdk_pid62095
00:20:35.900 Removing: /var/run/dpdk/spdk_pid62116
00:20:35.900 Removing: /var/run/dpdk/spdk_pid62154
00:20:35.900 Removing: /var/run/dpdk/spdk_pid62174
00:20:35.900 Removing: /var/run/dpdk/spdk_pid62209
00:20:35.900 Removing: /var/run/dpdk/spdk_pid62273
00:20:35.900 Removing: /var/run/dpdk/spdk_pid62302
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62311
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62345
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62360
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62368
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62410
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62431
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62465
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62475
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62490
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62499
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62511
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62524
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62534
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62548
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62581
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62609
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62624
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62657
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62667
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62676
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62721
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62738
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62770
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62783
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62785
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62798
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62811
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62823
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62826
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62839
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62913
00:20:36.159 Removing: /var/run/dpdk/spdk_pid62966
00:20:36.159 Removing: /var/run/dpdk/spdk_pid63082
00:20:36.159 Removing: /var/run/dpdk/spdk_pid63115
00:20:36.159 Removing: /var/run/dpdk/spdk_pid63160
00:20:36.159 Removing: /var/run/dpdk/spdk_pid63182
00:20:36.159 Removing: /var/run/dpdk/spdk_pid63204
00:20:36.159 Removing: /var/run/dpdk/spdk_pid63224
00:20:36.159 Removing: /var/run/dpdk/spdk_pid63261
00:20:36.159 Removing: /var/run/dpdk/spdk_pid63277
00:20:36.159 Removing: /var/run/dpdk/spdk_pid63351
00:20:36.159 Removing: /var/run/dpdk/spdk_pid63374
00:20:36.159 Removing: /var/run/dpdk/spdk_pid63429
00:20:36.159 Removing: /var/run/dpdk/spdk_pid63496
00:20:36.159 Removing: /var/run/dpdk/spdk_pid63562
00:20:36.159 Removing: /var/run/dpdk/spdk_pid63596
00:20:36.159 Removing: /var/run/dpdk/spdk_pid63688
00:20:36.159 Removing: /var/run/dpdk/spdk_pid63736
00:20:36.159 Removing: /var/run/dpdk/spdk_pid63768
00:20:36.159 Removing: /var/run/dpdk/spdk_pid63992
00:20:36.159 Removing: /var/run/dpdk/spdk_pid64084
00:20:36.159 Removing: /var/run/dpdk/spdk_pid64118
00:20:36.159 Removing: /var/run/dpdk/spdk_pid64459
00:20:36.159 Removing: /var/run/dpdk/spdk_pid64497
00:20:36.159 Removing: /var/run/dpdk/spdk_pid64788
00:20:36.159 Removing: /var/run/dpdk/spdk_pid65201
00:20:36.159 Removing: /var/run/dpdk/spdk_pid65470
00:20:36.159 Removing: /var/run/dpdk/spdk_pid66251
00:20:36.159 Removing: /var/run/dpdk/spdk_pid67067
00:20:36.159 Removing: /var/run/dpdk/spdk_pid67189
00:20:36.159 Removing: /var/run/dpdk/spdk_pid67257
00:20:36.159 Removing: /var/run/dpdk/spdk_pid68519
00:20:36.159 Removing: /var/run/dpdk/spdk_pid68775
00:20:36.159 Removing: /var/run/dpdk/spdk_pid72060
00:20:36.159 Removing: /var/run/dpdk/spdk_pid72360
00:20:36.159 Removing: /var/run/dpdk/spdk_pid72468
00:20:36.159 Removing: /var/run/dpdk/spdk_pid72602
00:20:36.159 Removing: /var/run/dpdk/spdk_pid72635
00:20:36.159 Removing: /var/run/dpdk/spdk_pid72657
00:20:36.159 Removing: /var/run/dpdk/spdk_pid72690
00:20:36.159 Removing: /var/run/dpdk/spdk_pid72782
00:20:36.159 Removing: /var/run/dpdk/spdk_pid72917
00:20:36.159 Removing: /var/run/dpdk/spdk_pid73068
00:20:36.159 Removing: /var/run/dpdk/spdk_pid73143
00:20:36.159 Removing: /var/run/dpdk/spdk_pid73336
00:20:36.160 Removing: /var/run/dpdk/spdk_pid73419
00:20:36.160 Removing: /var/run/dpdk/spdk_pid73513
00:20:36.160 Removing: /var/run/dpdk/spdk_pid73818
00:20:36.160 Removing: /var/run/dpdk/spdk_pid74228
00:20:36.160 Removing: /var/run/dpdk/spdk_pid74236
00:20:36.160 Removing: /var/run/dpdk/spdk_pid74510
00:20:36.160 Removing: /var/run/dpdk/spdk_pid74530
00:20:36.160 Removing: /var/run/dpdk/spdk_pid74544
00:20:36.160 Removing: /var/run/dpdk/spdk_pid74577
00:20:36.160 Removing: /var/run/dpdk/spdk_pid74588
00:20:36.160 Removing: /var/run/dpdk/spdk_pid74878
00:20:36.160 Removing: /var/run/dpdk/spdk_pid74931
00:20:36.160 Removing: /var/run/dpdk/spdk_pid75207
00:20:36.160 Removing: /var/run/dpdk/spdk_pid75409
00:20:36.418 Removing: /var/run/dpdk/spdk_pid75789
00:20:36.418 Removing: /var/run/dpdk/spdk_pid76289
00:20:36.418 Removing: /var/run/dpdk/spdk_pid77100
00:20:36.418 Removing: /var/run/dpdk/spdk_pid77693
00:20:36.418 Removing: /var/run/dpdk/spdk_pid77695
00:20:36.418 Removing: /var/run/dpdk/spdk_pid79590
00:20:36.418 Removing: /var/run/dpdk/spdk_pid79651
00:20:36.418 Removing: /var/run/dpdk/spdk_pid79716
00:20:36.418 Removing: /var/run/dpdk/spdk_pid79773
00:20:36.418 Removing: /var/run/dpdk/spdk_pid79894
00:20:36.418 Removing: /var/run/dpdk/spdk_pid79954
00:20:36.418 Removing: /var/run/dpdk/spdk_pid80009
00:20:36.418 Removing: /var/run/dpdk/spdk_pid80069
00:20:36.418 Removing: /var/run/dpdk/spdk_pid80384
00:20:36.418 Removing: /var/run/dpdk/spdk_pid81547
00:20:36.418 Removing: /var/run/dpdk/spdk_pid81687
00:20:36.418 Removing: /var/run/dpdk/spdk_pid81935
00:20:36.418 Removing: /var/run/dpdk/spdk_pid82477
00:20:36.418 Removing: /var/run/dpdk/spdk_pid82639
00:20:36.418 Removing: /var/run/dpdk/spdk_pid82797
00:20:36.418 Removing: /var/run/dpdk/spdk_pid82894
00:20:36.418 Clean
00:20:41.685 07:46:07 nvmf_dif -- common/autotest_common.sh@1451 -- # return 20
00:20:41.685 07:46:07 nvmf_dif -- common/autotest_common.sh@1 -- # :
00:20:41.685 07:46:07 nvmf_dif -- common/autotest_common.sh@1 -- # exit 1
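
The Cleaning pass above sweeps the per-instance DPDK runtime directories under /var/run/dpdk (spdk0 through spdk4 plus the per-PID spdk_pid* prefixes) and the shared-memory trace files, after which the harness keeps the failing test's status (autotest_es=20) and exits 1. A rough sketch of such a sweep, assuming the conventional /var/run/dpdk and /dev/shm locations shown in the log; the real autotest_cleanup does considerably more:

    #!/usr/bin/env bash
    # Rough sketch of a cleanup sweep like the "Cleaning" pass above.
    # Assumes the conventional DPDK runtime location (/var/run/dpdk) and the
    # SPDK trace files in /dev/shm; not the actual autotest_cleanup code.

    clean_runtime_state() {
        local f
        # Per-instance runtime dirs: spdk0..spdk4 and spdk_pid<pid> prefixes
        for f in /var/run/dpdk/spdk*; do
            [ -e "$f" ] || continue
            echo "Removing: $f"
            rm -rf "$f"
        done
        # Shared-memory trace buffers left behind by the targets
        for f in /dev/shm/nvmf_trace.* /dev/shm/spdk_tgt_trace.pid*; do
            [ -e "$f" ] || continue
            echo "Removing: $f"
            rm -f "$f"
        done
    }

    clean_runtime_state

Each target instance gets its own runtime directory named after its DPDK file prefix, which is why entries from many earlier PIDs accumulate and are removed here; the nonzero exit that follows is what the pipeline below reports as "script returned exit code 1".
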
00:20:42.260 [Pipeline] }
00:20:42.284 [Pipeline] // timeout
00:20:42.292 [Pipeline] }
00:20:42.314 [Pipeline] // stage
00:20:42.321 [Pipeline] }
00:20:42.325 ERROR: script returned exit code 1
00:20:42.326 Setting overall build result to FAILURE
00:20:42.346 [Pipeline] // catchError
00:20:42.356 [Pipeline] stage
00:20:42.358 [Pipeline] { (Stop VM)
00:20:42.373 [Pipeline] sh
00:20:42.652 + vagrant halt
00:20:45.928 ==> default: Halting domain...
00:20:52.564 [Pipeline] sh
00:20:52.845 + vagrant destroy -f
00:20:56.124 ==> default: Removing domain...
00:20:56.134 [Pipeline] sh
00:20:56.412 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:20:56.422 [Pipeline] }
00:20:56.442 [Pipeline] // stage
00:20:56.449 [Pipeline] }
00:20:56.468 [Pipeline] // dir
00:20:56.474 [Pipeline] }
00:20:56.493 [Pipeline] // wrap
00:20:56.500 [Pipeline] }
00:20:56.516 [Pipeline] // catchError
00:20:56.527 [Pipeline] stage
00:20:56.530 [Pipeline] { (Epilogue)
00:20:56.546 [Pipeline] sh
00:20:56.827 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:20:58.738 [Pipeline] catchError
00:20:58.740 [Pipeline] {
00:20:58.754 [Pipeline] sh
00:20:59.035 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:20:59.035 Artifacts sizes are good
00:20:59.043 [Pipeline] }
00:20:59.061 [Pipeline] // catchError
00:20:59.072 [Pipeline] archiveArtifacts
00:20:59.079 Archiving artifacts
00:20:59.324 [Pipeline] cleanWs
00:20:59.334 [WS-CLEANUP] Deleting project workspace...
00:20:59.334 [WS-CLEANUP] Deferred wipeout is used...
00:20:59.340 [WS-CLEANUP] done
00:20:59.342 [Pipeline] }
00:20:59.359 [Pipeline] // stage
00:20:59.367 [Pipeline] }
00:20:59.383 [Pipeline] // node
00:20:59.390 [Pipeline] End of Pipeline
00:20:59.465 Finished: FAILURE
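
For reference, the Stop VM and Epilogue stages above reduce to a short shell sequence: halt and destroy the Vagrant guest, move the collected output into the job workspace, then compress and size-check the artifacts. A minimal sketch of that sequence using the same commands and paths the pipeline ran; the real job wraps each step in its own stage and catchError block:

    #!/usr/bin/env bash
    # Minimal sketch of the post-run teardown shown above; the pipeline runs
    # these as separate sh steps inside stages rather than one script.
    set -euo pipefail

    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest

    vagrant halt                    # "Halting domain..."
    vagrant destroy -f              # "Removing domain..."
    mv output "$WORKSPACE/output"   # hand results to the archiving steps

    jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
    jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh

The FAILURE result is driven entirely by the test script's nonzero exit; the teardown itself completed, the artifacts passed the size check, and the workspace was cleaned.
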